
Thursday 29 August 2013

How do the Postgres foreign key 'on update' and 'on delete' options work?

We know that the foreign keys disallow creation of orders that do not relate to any products. But what if a product is removed after an order is created that references it? SQL allows you to handle that as well. Intuitively, we have a few options:
Disallow deleting a referenced product
Delete the orders as well
Something else?

CREATE TABLE order_items (
 product_no integer REFERENCES products ON DELETE RESTRICT,
 order_id integer REFERENCES orders ON DELETE CASCADE,
 quantity integer,
 PRIMARY KEY (product_no, order_id)
);
Restricting and cascading deletes are the two most common options. RESTRICT prevents deletion of a referenced row. NO ACTION means that if any referencing rows still exist when the constraint is checked, an error is raised; this is the default behavior if you do not specify anything. (The essential difference between these two choices is that NO ACTION allows the check to be deferred until later in the transaction, whereas RESTRICT does not.) CASCADE specifies that when a referenced row is deleted, the row(s) referencing it should be automatically deleted as well.

There are two other options: SET NULL and SET DEFAULT. These cause the referencing columns to be set to nulls or default values, respectively, when the referenced row is deleted. Note that these do not excuse you from observing any constraints. For example, if an action specifies SET DEFAULT but the default value would not satisfy the foreign key, the operation will fail.
Analogous to ON DELETE there is also ON UPDATE which is invoked when a referenced column is changed (updated). The possible actions are the same.
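The RESTRICT and CASCADE behaviors described above can be observed in any engine that enforces foreign keys. Here is a small sketch using SQLite through Python's built-in sqlite3 module (note this is an illustration, not PostgreSQL itself: SQLite only enforces foreign keys once the pragma is enabled, and its RESTRICT and NO ACTION checks are both immediate):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

conn.execute("CREATE TABLE products (product_no INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE order_items (
        product_no INTEGER REFERENCES products ON DELETE RESTRICT,
        order_id   INTEGER REFERENCES orders   ON DELETE CASCADE,
        quantity   INTEGER,
        PRIMARY KEY (product_no, order_id)
    )""")
conn.execute("INSERT INTO products VALUES (1)")
conn.execute("INSERT INTO orders VALUES (10)")
conn.execute("INSERT INTO order_items VALUES (1, 10, 5)")

# RESTRICT: deleting a product that is still referenced fails
try:
    conn.execute("DELETE FROM products WHERE product_no = 1")
except sqlite3.IntegrityError as e:
    print("blocked:", e)

# CASCADE: deleting the order silently removes its order_items rows
conn.execute("DELETE FROM orders WHERE order_id = 10")
print(conn.execute("SELECT COUNT(*) FROM order_items").fetchone()[0])
```

The same CREATE TABLE statement from the example above works unchanged; only the pragma line is SQLite-specific.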
edit: You might want to take a look at this related question: When/Why to use Cascading in SQL Server?. The concepts behind the question/answers are the same.

 Source :  http://stackoverflow.com/questions/225881/how-do-the-postgres-foreign-key-on-update-and-on-delete-options-work

Wednesday 21 August 2013

SELECT TOP

In SQL Server, TOP may be used to return the first n number of rows in a query. For example,
SELECT TOP 100 * FROM users ORDER BY id
might be used to return the first 100 people that registered for a site. (This is not necessarily the best way, I am just using it as an example.) My question is: what is the equivalent to TOP in other databases, such as Oracle, MySQL, PostgreSQL, etc.? If there is no equivalent keyword, what workarounds can you recommend to achieve the same result?






To select first 100 rows:
MySQL and PostgreSQL:
 
SELECT    *
FROM      Table
ORDER BY  column        
LIMIT 100

Oracle:
SELECT  *
FROM    (
        SELECT  t.*
        FROM    table t
        ORDER BY
                column
        )
WHERE   rownum <= 100
Note that you need a subquery here. ROWNUM is assigned before the ORDER BY is applied, so without the subquery you would get the first 100 rows in arbitrary order and then sort just those.
To select rows between 100 and 300:
MySQL:
SELECT  *
FROM    TABLE
ORDER BY
        column
LIMIT   100, 200
PostgreSQL:
SELECT  *
FROM    Table
ORDER BY
        column
OFFSET 100 LIMIT 200
Oracle:
SELECT  *
FROM    (
        SELECT  t.*, ROW_NUMBER() OVER (ORDER BY column) AS rn
        FROM    table t
        )
WHERE   rn >= 100
        AND rownum <= 200
Note that an attempt to simplify it with ROWNUM BETWEEN 100 AND 200 (as opposed to rn BETWEEN 100 AND 200 in the outer query) will return nothing in Oracle!
RN BETWEEN 100 AND 200 will work in Oracle too but is less efficient.
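The MySQL/PostgreSQL variants are easy to try out in SQLite, which supports the same LIMIT/OFFSET syntax. A quick sketch via Python's sqlite3 module (table and column names are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, "user%d" % i) for i in range(1, 501)])

# First 100 rows: the equivalent of SQL Server's SELECT TOP 100
first_page = conn.execute(
    "SELECT id FROM users ORDER BY id LIMIT 100").fetchall()

# Rows 101..300: skip 100, take 200
middle = conn.execute(
    "SELECT id FROM users ORDER BY id LIMIT 200 OFFSET 100").fetchall()

print(first_page[0][0], first_page[-1][0])
print(middle[0][0], middle[-1][0])
```

Note that without the ORDER BY, both queries would return 100 and 200 rows in an undefined order, which is rarely what paging code wants.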

Saturday 17 August 2013

JavaFX 2 - Mastering the TreeView

JavaFX 2 - Mastering the TreeView
The TreeView is a really good component for presenting a tree structure, but it can be a bit tricky to get the different parts working together.

In this post we will create a demo which lets us:
  • Lazily load the children of the nodes in the tree.
  • Use a ChangeListener to print the selected node's text.
  • Add new nodes to the tree.

The simple demo which we will create:



First out, we create the components used in the demo:

  1. /** Create and setup all the components used in this demo. */  
  2.  private void initComponents() {  
  3.   LazyTreeItem rootItem = new LazyTreeItem("Root", 0);  
  4.   rootItem.setExpanded(true);  
  5.   treeView = new TreeView<String>(rootItem);  
  6.   treeView.getSelectionModel().setSelectionMode(SelectionMode.SINGLE);  
  7.   treeView.getSelectionModel().selectedItemProperty().addListener(this);  
  8.     
  9.   labelSelectedItem = new Label("Selected:");  
  10.     
  11.   labelErrorText = new Label("Error: ");  
  12.     
  13.   inputNewItem = new TextField();  
  14.     
  15.   buttonNewItem = new Button("Insert item");  
  16.   buttonNewItem.setOnAction(this);  
  17.  }  
Row 3: We start by creating the node which we will be using as the root of our tree. LazyTreeItem is a class we will create further down, able to load the children of the node on request (lazily).

Row 4: Making the root node expanded will render the root expanded with all children visible.

Row 5: Creating the TreeView with our root node as the root for the tree.

Row 6: By using the tree's selection model we set the tree to only support one selected node at a time. This can be configured to allow multiple nodes, but in this demo we are only interested in one node at a time.

Row 7: By adding a ChangeListener to the selection model we can track when we change the selected node in the tree. More on the ChangeListener later. For now just notice how we add the listener to the tree.

Rows 9 and onward create the other components used in the demo. Nothing fancy about them.

Laying out the demo

  1. @Override  
  2.   public void start(Stage stage) throws Exception {  
  3.   initComponents();  
  4.     
  5.   stage.setTitle("Demo of TreeView");  
  6.     
  7.   BorderPane mainPane = new BorderPane();  
  8.     
  9.   mainPane.setLeft(treeView);  
  10.     
  11.   VBox centerPane = new VBox();  
  12.   centerPane.setSpacing(3);  
  13.   centerPane.setPadding(new Insets(3));  
  14.   centerPane.getChildren().addAll(labelSelectedItem, labelErrorText, inputNewItem, buttonNewItem);  
  15.   mainPane.setCenter(centerPane);  
  16.     
  17.   Scene scene = new Scene(mainPane, 600, 400);  
  18.     
  19.   stage.setScene(scene);  
  20.   stage.sizeToScene();  
  21.   stage.show();    
  22.  }  

We are using a simple BorderPane where we place the TreeView to the left, and the rest of the components within a VBox in the center section.

Creating lazy loading of the children in the tree

So far things have been straightforward and without any fancy stuff. However, to solve the lazy loading of the child nodes we need to dig into TreeItem a bit. The idea of lazy loading is to load the child nodes only when we need them, and this is done by creating a class that extends TreeItem and overrides some of its methods.

  1. private class LazyTreeItem extends TreeItem<String> {  
  2.   /** The depth of this tree item in the {@link TreeView}. */  
  3.   private final int depth;  
  4.   /** Control if the children of this tree item has been loaded. */  
  5.   private boolean hasLoadedChildren = false;  
  6.     
  7.   public LazyTreeItem(String itemText, int depth) {  
  8.    super(itemText);  
  9.    this.depth = depth;  
  10.   }  
  11.     
  12.   @Override  
  13.   public ObservableList<TreeItem<String>> getChildren() {  
  14.    if (hasLoadedChildren == false) {  
  15.     loadChildren();  
  16.    }  
  17.    return super.getChildren();  
  18.   }  
  19.     
  20.   @Override  
  21.   public boolean isLeaf() {  
  22.    if (hasLoadedChildren == false) {  
  23.     loadChildren();  
  24.    }  
  25.    return super.getChildren().isEmpty();  
  26.   }  
  27.     
  28.   /** Create some dummy children for this item. */  
  29.   @SuppressWarnings("unchecked") // Safe to ignore since we know that the types are correct.  
  30.   private void loadChildren() {  
  31.    hasLoadedChildren = true;  
  32.    int localDepth = depth + 1;  
  33.    LazyTreeItem child1 = new LazyTreeItem("Child 1 (depth = " + localDepth + ")",  
  34. localDepth);  
  35.    LazyTreeItem child2 = new LazyTreeItem("Child 2 (depth = " + localDepth + ")",  
  36. localDepth);  
  37.    super.getChildren().setAll(child1, child2);  
  38.   }  
  39.     
  40.   /** Return the depth of this item within the {@link TreeView}.*/  
  41.   public int getDepth() {  
  42.    return depth;  
  43.   }  
  44.  }  

Row 1: We extend TreeItem; otherwise we would not be able to put our item in the TreeView.

Row 12-18: Overriding the getChildren method provides the ability to load the children lazily, since the method is first called when the parent node is expanded. Because we do not populate the children in the constructor of the node, we need to populate them here. Populating the children in the constructor would not make the node lazy, since all children would be loaded as soon as a new instance of the node is created.

Row 20-26: Overriding the isLeaf method is necessary since it is called when the node is rendered. If we did not load the children when this method is called, it would always return true, and the node would be rendered as a leaf without the ability to show any potential children.

Row 28-37: Here is where we would load real data if this were something other than a demo. For now we are satisfied with creating two dummy children for this node.

Row 37: Using the setAll method removes all existing children of the node and adds the new children instead. If we want to add more children to the node we should instead use addAll. More on this later when we implement a function to add new nodes to the tree.

Showing the selected item's text

When a user clicks a node in the tree we want to show its text in the 'Selected' label. This is achieved by using the ChangeListener which we attached to the selection model when we created the TreeView.

  1. ... implements EventHandler<ActionEvent>, ChangeListener<TreeItem<String>>  
  2.   
  3. ...  
  4.   
  5. treeView.getSelectionModel().selectedItemProperty().addListener(this);  
  6.   
  7. ...  
  8.   
  9. /** Handles change event from the {@link TreeView}. */  
  10.  @Override  
  11.  public void changed(ObservableValue<? extends TreeItem<String>> observableValue, TreeItem<String> oldItem, TreeItem<String> newItem) {  
  12.   labelSelectedItem.setText("Selected: " + newItem.getValue());  
  13.  }  

Row 1: The class implements the ChangeListener interface.

Row 5: We assign our listener to the selection model of the tree.

Row 11-13: This is where we implement the action we want to perform when the selection in the tree has changed. For this demo we simply print the text of the node in our label.

Adding a new node to the tree

In the last part of this demo we add a function for adding a new node to the tree. This is achieved using a text input field and a button, together with the selected node in the tree. When the user clicks the button we take the text from the text field and create a new child node under the selected node.

  1. /** Handles the action which is triggered by the button. */  
  2.  @Override  
  3.  public void handle(ActionEvent event) {  
  4.   LazyTreeItem selectedItem = (LazyTreeItem) treeView.getSelectionModel().getSelectedItem();  
  5.  selectedItem.getChildren().add(new LazyTreeItem(inputNewItem.getText(), selectedItem.getDepth()));  
  6.  }  

Row 3: We implement the EventHandler interface and connect the button to our handler.

Row 4: Using the selection model of the TreeView to get the selected item.

Row 5: Adding a new node to the tree is simple: get the children list and insert the new node using the add method. Remember that using setAll would remove all old children from the node.

Summary

In this demo we have learned how to use the TreeView to load the children of nodes lazily, how to get information from the selected node, and how to add new nodes to the tree.

The complete source code can be fetched from: http://files.loop81.com/rest/download/6643f32b-c6d8-4816-9a3d-d062ee8ce57e.
Source: http://www.loop81.com/2011/11/javafx-20-mastering-treeview.html

Saturday 10 August 2013

Why I cannot successfully create the foreign keys?

From Navicat Wiki

Apply OS : Windows, Mac, Linux
Apply Navicat Product : Navicat for MySQL, Navicat for PostgreSQL, Navicat for SQLite, Navicat for Oracle, Navicat Premium
Apply Navicat Version No. : All

 

MySQL

If you receive a MySQL error like 1005: Can't create table '.\mydb\#sql-44c_1c.frm' (errno: 150), it is caused by a failure to create the foreign keys. There are several cases that can cause foreign key creation to fail on a MySQL database. These errors come from MySQL itself.

Example cases :
  1. The two fields (the field and the foreign field) use incompatible field types.
  2. You use "On Delete Set Null" but the field does not allow null.

To declare foreign keys in MySQL, there are a few points to bear in mind:
  1. Both tables must be InnoDB type.
  2. In the referenced table, there must be an index where the referenced columns are listed as the first columns in the same order.
  3. Index prefixes on foreign key columns are not supported.
  4. InnoDB needs indexes on foreign keys and referenced keys so that foreign key checks can be fast and not require a table scan.
  5. The two key fields must have the compatible field type.
  6. The size and the sign of integer types has to be the same.
  7. The length of string types need not be the same.
  8. The foreign key name must be unique within the database.
  9. If you specify a SET NULL action, make sure that you have not declared the columns in the child table as NOT NULL.

For more details on Foreign Key Constraints, please visit - http://dev.mysql.com/doc/refman/5.1/en/innodb-foreign-key-constraints.html


PostgreSQL

If you receive a PostgreSQL error like ERROR: there is no unique constraint matching given keys for referenced table "xxxx", it is caused by a failure to create the foreign keys.

To declare foreign keys in PostgreSQL, there are a few points to bear in mind:
  1. A FOREIGN KEY constraint must refer to a PRIMARY KEY or UNIQUE constraint.
  2. The two key fields must have the compatible data type.
  3. Must have REFERENCES privilege on both the referencing and referenced tables.
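Point 1 (the referenced columns must carry a PRIMARY KEY or UNIQUE constraint) is easy to demonstrate. The sketch below uses SQLite via Python's sqlite3 module purely as an illustration: SQLite enforces the same rule, but it reports the error when the child table is first used rather than at CREATE time, whereas PostgreSQL rejects the CREATE TABLE itself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# 'code' is neither a PRIMARY KEY nor UNIQUE -> the FK target is invalid
conn.execute("CREATE TABLE parent (code INTEGER)")
conn.execute("CREATE TABLE child (code INTEGER REFERENCES parent(code))")
conn.execute("INSERT INTO parent VALUES (1)")

try:
    conn.execute("INSERT INTO child VALUES (1)")
    failed = False
except sqlite3.Error as e:  # SQLite: "foreign key mismatch ..."
    failed = True
    print(e)

# With a UNIQUE constraint on the parent column, the same FK works fine
conn.execute("CREATE TABLE parent2 (code INTEGER UNIQUE)")
conn.execute("CREATE TABLE child2 (code INTEGER REFERENCES parent2(code))")
conn.execute("INSERT INTO parent2 VALUES (1)")
conn.execute("INSERT INTO child2 VALUES (1)")
```

The table and column names here are made up for the demo.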


Oracle

To declare foreign keys in Oracle, there are a few points to bear in mind:
  1. A FOREIGN KEY constraint must refer to a PRIMARY KEY or UNIQUE constraint.
  2. The two key fields must have the compatible data type.
  3. Composite foreign keys are limited to 32 columns.
  4. Must have privileged access to the parent and child tables.

For more details on Foreign Key Constraints, please visit - http://download.oracle.com/docs/cd/B14117_01/appdev.101/b10795/adfns_co.htm#1006976


SQLite

To declare foreign keys in SQLite, there are a few points to bear in mind:
  1. The parent and child keys must have the same cardinality

For more details on Foreign Key Constraints, please visit - http://www.sqlite.org/foreignkeys.html

source : http://wiki.navicat.com/wiki/index.php/Why_I_cannot_successfully_create_the_foreign_keys%3F

10 Common Mistakes Java Developers Make when Writing SQL

 
Java developers mix object-oriented thinking with imperative thinking, depending on their levels of:
  • Skill (anyone can code imperatively)
  • Dogma (some use the “Pattern-Pattern”, i.e. the pattern of applying patterns everywhere and giving them names)
  • Mood (true OO is more clumsy to write than imperative code. At first)
But when Java developers write SQL, everything changes. SQL is a declarative language that has nothing to do with either object-oriented or imperative thinking. It is very easy to express a query in SQL. It is not so easy to express it optimally or correctly. Not only do developers need to re-think their programming paradigm, they also need to think in terms of set theory.
Here are common mistakes that a Java developer makes when writing SQL (in no particular order):

1. Forgetting about NULL

Misunderstanding NULL is probably the biggest mistake a Java developer can make when writing SQL. This is partly due to the fact that NULL is also called UNKNOWN. If it were only called UNKNOWN, it would be easier to understand. Another reason is that JDBC maps SQL NULL to Java null when fetching data or when binding variables. This may lead to thinking that NULL = NULL (SQL) would behave the same way as null == null (Java).
One of the crazier examples of misunderstanding NULL is when NULL predicates are used with row value expressions.
Another, subtler problem appears when misunderstanding the meaning of NULL in NOT IN anti-joins.
The Cure:
Train yourself. There’s nothing but explicitly thinking about NULL, every time you write SQL:
  • Is this predicate correct with respect to NULL?
  • Does NULL affect the result of this function?
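Both gotchas fit in a few lines. A sketch using SQLite through Python's sqlite3 module (the behavior shown is standard three-valued SQL logic, not SQLite-specific):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# NULL = NULL is not true in SQL -- the comparison itself yields NULL (UNKNOWN)
row = conn.execute("SELECT NULL = NULL").fetchone()
print(row[0])  # None, not 1

# NOT IN with a NULL in the list matches nothing at all:
# x <> NULL is UNKNOWN for every row, so no row qualifies
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
rows = conn.execute("SELECT x FROM t WHERE x NOT IN (2, NULL)").fetchall()
print(rows)  # [] -- not [(1,), (3,)] as Java intuition suggests
```

This is exactly the NOT IN anti-join trap mentioned above: a single NULL in the subquery result silently empties the whole result set.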

2. Processing data in Java memory

Few Java developers know SQL very well. The occasional JOIN, the odd UNION, fine. But window functions? Grouping sets? A lot of Java developers load SQL data into memory, transform the data into some appropriate collection type, execute nasty maths on that collection with verbose loop structures (at least, before Java 8's Collections improvements).
But some SQL databases support advanced (and SQL standard!) OLAP features that tend to perform a lot better and are much easier to write. A (non-standard) example is Oracle’s awesome MODEL clause. Just let the database do the processing and fetch only the results into Java memory. Because after all some very smart guys have optimised these expensive products. So in fact, by moving OLAP to the database, you gain two things:
  • Simplicity. It’s probably easier to write correctly in SQL than in Java
  • Performance. The database will probably be faster than your algorithm. And more importantly, you don’t have to transmit millions of records over the wire.
The Cure:
Every time you implement a data-centric algorithm in Java, ask yourself: Is there a way to let the database perform that work for me?

3. Using UNION instead of UNION ALL

It’s a shame that UNION ALL needs an extra keyword compared to UNION. It would be much better if the SQL standard had been defined to support:
  • UNION (allowing duplicates)
  • UNION DISTINCT (removing duplicates)
Not only is the removal of duplicates rarely needed (or sometimes even wrong), it is also quite slow for large result sets with many columns, as the two subselects need to be ordered, and each tuple needs to be compared with its subsequent tuple.
Note that even if the SQL standard specifies INTERSECT ALL and EXCEPT ALL, hardly any database implements these less useful set operations.
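The difference is easy to see with a toy example. A sketch using SQLite via Python's sqlite3 module (tables and values made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (v INTEGER)")
conn.execute("CREATE TABLE b (v INTEGER)")
conn.executemany("INSERT INTO a VALUES (?)", [(1,), (2,), (2,)])
conn.executemany("INSERT INTO b VALUES (?)", [(2,), (3,)])

# UNION removes duplicates -- which costs an implicit sort/dedup step
union = conn.execute("SELECT v FROM a UNION SELECT v FROM b").fetchall()

# UNION ALL keeps every row and skips the dedup work entirely
union_all = conn.execute("SELECT v FROM a UNION ALL SELECT v FROM b").fetchall()

print(len(union))      # 3 distinct values
print(len(union_all))  # all 5 rows
```

If the two subselects cannot produce overlapping rows in the first place, UNION ALL gives the same result as UNION without paying for the deduplication.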
The Cure:
Every time you write a UNION, think if you actually wanted to write UNION ALL.

4. Using JDBC Paging to page large results

Most databases support some way of paging ordered results through LIMIT .. OFFSET, TOP .. START AT, OFFSET .. FETCH clauses. In the absence of support for these clauses, there is still the possibility of ROWNUM (Oracle) or ROW_NUMBER() OVER() filtering (DB2, SQL Server 2008 and earlier), which is much faster than paging in memory. This is specifically true for large offsets!
The Cure:
Just use those clauses, or a tool (such as jOOQ) that can simulate those clauses for you.

5. Joining data in Java memory

From early days of SQL, some developers still have an uneasy feeling when expressing JOINs in their SQL. There is an inherent fear of JOIN being slow. This can be true if a cost-based optimiser chooses to perform a nested loop, possibly loading complete tables into database memory, before creating a joined table source. But that happens rarely. With appropriate predicates, constraints and indexes, MERGE JOIN and HASH JOIN operations are extremely fast. It’s all about the correct metadata (I cannot cite Tom Kyte often enough for this). Nonetheless, there are probably still quite a few Java developers who will load two tables from separate queries into maps and join them in Java memory in one way or another.
The Cure:
If you’re selecting from various tables in various steps, think again to see if you cannot express your query in a single statement.

6. Using DISTINCT or UNION to remove duplicates from an accidental cartesian product

With heavy joining, one can lose track of all the relations that are playing a role in a SQL statement. Specifically, if multi-column foreign key relationships are involved, it is possible to forget to add the relevant predicates in JOIN .. ON clauses. This might result in duplicate records, but maybe only in exceptional cases. Some developers may then choose to use DISTINCT to remove those duplicates again. This is wrong in three ways:
  • It (may) solve the symptoms but not the problem. It may as well not solve the symptoms in edge-cases.
  • It is slow for large result sets with many columns. DISTINCT performs an ORDER BY operation to remove duplicates.
  • It is slow for large cartesian products, which will still load lots of data into memory
The Cure:
As a rule of thumb, when you get unwanted duplicates, always review your JOIN predicates. There’s probably a subtle cartesian product in there somewhere.

7. Not using the MERGE statement

This isn’t really a mistake, but probably some lack of knowledge or some fear towards the powerful MERGE statement. Some databases know other forms of UPSERT statements, e.g. MySQL’s ON DUPLICATE KEY UPDATE clause. But MERGE is really so powerful, most importantly in databases that heavily extend the SQL standard, such as SQL Server.
The Cure:
If you’re UPSERTING by chaining INSERT and UPDATE or by chaining SELECT .. FOR UPDATE and then INSERT or UPDATE, think again. Apart from risking race conditions, you might be able to express a simpler MERGE statement.

8. Using aggregate functions instead of window functions

Before the introduction of window functions, the only means to aggregate data in SQL was by using a GROUP BY clause along with aggregate functions in the projection. This works well in many cases, and if aggregation data needed to be enriched with regular data, the grouped query can be pushed down into a joined subquery.
But SQL:2003 defined window functions, which are implemented by many popular database vendors. Window functions can aggregate data on result sets that are not grouped. In fact, each window function supports its own, independent PARTITION BY clause, which is an awesome tool for reporting.
Using window functions will:
  • Lead to more readable SQL (less dedicated GROUP BY clauses in subqueries)
  • Improve performance, as a RDBMS is likely to optimise window functions more easily
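The "enrich detail rows with aggregated data" case mentioned above looks like this in practice. A sketch using SQLite via Python's sqlite3 module (window functions require SQLite 3.25 or newer; the sales table is made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # needs SQLite >= 3.25 for window functions
conn.execute("CREATE TABLE sales (dept TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("a", 10), ("a", 30), ("b", 100)])

# Each detail row is enriched with its group's total -- no GROUP BY subquery,
# no join back to the detail rows
rows = conn.execute("""
    SELECT dept, amount,
           SUM(amount) OVER (PARTITION BY dept) AS dept_total
    FROM sales
    ORDER BY dept, amount""").fetchall()
for r in rows:
    print(r)
```

With GROUP BY alone, the same result would need a grouped subquery joined back to the sales table.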
The Cure:
When you write a GROUP BY clause in a subquery, think again if this cannot be done with a window function.

9. Using in-memory sorting for sort indirections

The SQL ORDER BY clause supports many types of expressions, including CASE expressions, which can be very useful for sort indirections. You should probably never sort data in Java memory because you think that
  • SQL sorting is too slow
  • SQL sorting cannot do it
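A typical sort indirection is ordering by a business ranking rather than alphabetically. A sketch using SQLite via Python's sqlite3 module (table and priority values made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (title TEXT, priority TEXT)")
conn.executemany("INSERT INTO tickets VALUES (?, ?)",
                 [("t1", "low"), ("t2", "high"), ("t3", "medium")])

# Sort by a business ordering, not alphabetically -- in SQL, not in Java
rows = conn.execute("""
    SELECT title FROM tickets
    ORDER BY CASE priority
                 WHEN 'high'   THEN 1
                 WHEN 'medium' THEN 2
                 ELSE 3
             END""").fetchall()
print([r[0] for r in rows])  # ['t2', 't3', 't1']
```

Sorting alphabetically on priority would give high, low, medium, which is why the CASE indirection is needed.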
The Cure:
If you sort any SQL data in memory, think again if you cannot push sorting into your database. This goes along well with pushing paging into the database.

10. Inserting lots of records one by one

JDBC knows batching, and you should use it. Do not INSERT thousands of records one by one, re-creating a new PreparedStatement every time. If all of your records go to the same table, create a batch INSERT statement with a single SQL statement and multiple bind value sets. Depending on your database and database configuration, you may need to commit after a certain amount of inserted records, in order to keep the UNDO log slim.
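The article's point is about JDBC's addBatch/executeBatch, but the same pattern exists in most database APIs. A sketch of the idea using Python's sqlite3 executemany, which prepares one statement and binds many value sets against it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER, msg TEXT)")

records = [(i, "event %d" % i) for i in range(10000)]

# One prepared statement, many bind-value sets -- instead of 10,000
# individually prepared and executed INSERT statements
conn.executemany("INSERT INTO log VALUES (?, ?)", records)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM log").fetchone()[0])
```

As the text notes, for very large loads you may want to commit in chunks rather than once at the end, depending on how your database manages its undo/redo logs.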
The Cure:
Always batch-insert large sets of data.

source: http://java.dzone.com/articles/10-common-mistakes-java

Saturday 3 August 2013

Study Java At...



http://edu.makery.ch/blog/2012/12/19/javafx-event-handlers-and-change-listeners/
http://www.loop81.com/
->javafx


http://sourcemaking.com/design_patterns
http://www.tutorialspoint.com/design_pattern/strategy_pattern.htm
->design pattern


others:
10/08/2013: http://java.dzone.com/articles/10-common-mistakes-java

http://howtodoinjava.com



updating...

JASPER REPORT

 

Architecture

As shown in the above figure, the JasperReports architecture is based on declarative XML files, which by convention have the extension jrxml and contain the report layout. Many third-party design tools (such as iReport or JasperAssistant) can generate the jrxml file for you. The design file is filled with the report's results, which can be fetched from a database, XML files, a Java collection, comma-separated values or models. Jasper can communicate with those data sources and more; it can merge any number of data sources together and manipulate the results of any combination. This communication goes through JDBC, JNDI, XQuery, EJBQL, Hibernate or existing Oracle PL/SQL. You can also define your own data source class and pass it to the Jasper engine directly. After you define the report layout in jrxml format and determine your data source(s), the Jasper engine does the rest of the work: it compiles your design file, fills it with the results fetched from the data source, and generates the report in the chosen export format (PDF, Excel, HTML, XML, RTF, TXT, etc.).

Report Definition file structure (jrxml):

The Jasper design file (jrxml) contains the following elements:
  • <jasperReport> - the root element.
  • <title> - its contents are printed only once, at the beginning of the report.
  • <pageHeader> - its contents are printed at the beginning of every page in the report.
  • <detail> - contains the body of the report, repeated once per result row.
  • <pageFooter> - its contents are printed at the bottom of every page in the report.
  • <band> - defines a report section; all of the above elements contain a band element as their only child element.
Only the root element is mandatory; the rest of the elements are optional.
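Put together, a minimal jrxml skeleton looks roughly like this (the name, page sizes and band heights are placeholder values for illustration, not defaults):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jasperReport name="ItemReport" pageWidth="595" pageHeight="842">
    <title>
        <band height="50"><!-- printed once, at the top of the report --></band>
    </title>
    <pageHeader>
        <band height="20"><!-- printed at the top of every page --></band>
    </pageHeader>
    <detail>
        <band height="15"><!-- repeated once per row in the data source --></band>
    </detail>
    <pageFooter>
        <band height="20"><!-- printed at the bottom of every page --></band>
    </pageFooter>
</jasperReport>
```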

 

Environment

To set up working environment we need to download JasperReport jar file from the following URL: http://sourceforge.net/project/showfiles.php?group_id=36382&package_id=28579
And add the following jars to your project classpath:
  • jasperreports-2.0.4.jar
  • commons-digester-1.7.jar
  • commons-collections-2.1.jar (commons-collections.jar)
  • commons-logging-1.0.2.jar
  • commons-beanutils.jar
  • iText-2.0.7.jar (used for PDF exporting)

Sample application

In this section we'll introduce a sample application that generates PDF, HTML and Excel files containing the results of our report, which is built over an Oracle database containing the following table:
ITEM
ITEM_ID ---- NUMBER(5) --- NOT NULL
CATEGORY_ID ---- NUMBER(5) --- NOT NULL
ITEM_NAME ---- VARCHAR2(50) --- NOT NULL
ITEM_DESCRIPTION ---- VARCHAR2(200)
ITEM_AMOUNT ---- NUMBER(5) ---- NOT NULL

Result: The report should retrieve the items with an amount less than or equal to 100.
We're going to divide the work into two steps:
  1. Generate the report design (jrxml file).
  2. Implement application that assigns data source, compiles jrxml file and exports result in the chosen format.

Source: http://java.dzone.com/articles/java-reporting-part-2.