Wednesday, 31 December 2014

[Windows 7 Install] Unable to copy files using windows 7 usb/dvd download tool

Sometimes, if your USB stick has something left over in its MBR, you might get this error:
Windows 7 USB/DVD Download Tool error: We were unable to copy your files. Please check your USB device and the selected ISO file and try again.
You have to start a command prompt as Administrator (on Windows 7 that means right-clicking cmd and selecting Run as Administrator) and use the diskpart utility.


WARNING: Be careful to select the right drive, or your day won't have a happy end: if you select the wrong drive, you will lose all the data on it!


Instead of formatting the partition with FAT32 (step 8 below), you can also use NTFS (like WUDT does), but then you need an extra step to make the drive bootable:
Bootsect.exe /nt60 X:
"X:" is the drive letter of your USB stick. Bootsect.exe can be found on the Windows 7 DVD in the boot folder. However, I can't really recommend using NTFS; some USB sticks, at least, appeared to be slower with NTFS.
  1. Start a command prompt as Administrator and type  diskpart
  2. type  list disk
  3. type  select disk  followed by the number of your USB disk (identify it by its capacity, e.g. select disk 1)
  4. type  clean
  5. type  create partition primary
  6. type  select partition 1
  7. type  active
  8. type  format quick fs=fat32
  9. type  assign
  10. type  exit  to leave the diskpart utility
  11. type  exit  to close the command prompt
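
For reference, the whole session at the prompt looks like this (a sketch that assumes list disk showed your USB stick as Disk 1):

diskpart
list disk
select disk 1
clean
create partition primary
select partition 1
active
format quick fs=fat32
assign
exit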






Source: http://answers.microsoft.com/en-us/windows/forum/windows_7-windows_install/unable-to-copy-files-using-windows-7-usbdvd/bd21e76d-5174-4f76-8db5-36df105a12c5

Saturday, 13 December 2014

[ANDROID] The styles.xml file is not generated in my android project

ERROR: The styles.xml file is not generated in my android project.

SOLUTION:
I fixed mine by downgrading the SDK Tools to version 23.0.5, as seen in this other SO discussion.
Again, I have no idea WHY this is happening, only that it prevents me from creating new projects with the latest SDK.

ACTION:
Anyway, for those who want to downgrade the Android SDK Tools to a previous version, it can be done with these steps:
  1. Find your Android SDK folder
  2. Locate the "tools" subfolder and rename it to "tools1" (just to keep a backup copy of the original tools folder)
  3. Download from google repository the SDK Tool version you want to downgrade to (for instance: http://dl-ssl.google.com/android/repository/tools_r22.6.2-macosx.zip) and unpack it.
  4. The ZIP file you downloaded contains a tools folder that has to be moved to your Android SDK folder.
NOTE
Download the tools using one of the following URL patterns:
http://dl-ssl.google.com/android/repository/tools_rXXX-windows.zip
http://dl-ssl.google.com/android/repository/tools_rXXX-linux.zip
http://dl-ssl.google.com/android/repository/tools_rXXX-macosx.zip

Replace XXX with the exact revision number you want. For example, to download revision 23.0.5 for Windows, download the file:
http://dl-ssl.google.com/android/repository/tools_r23.0.5-windows.zip
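
On Linux or Mac the whole downgrade can also be done from a terminal. A rough sketch, assuming the SDK lives in ~/android-sdk and you want revision 23.0.5 (adjust both to your setup):

cd ~/android-sdk
mv tools tools1        # keep a backup of the current tools folder
wget http://dl-ssl.google.com/android/repository/tools_r23.0.5-linux.zip
unzip tools_r23.0.5-linux.zip   # the archive unpacks a fresh tools/ folder in place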

Source: http://stackoverflow.com/questions/9555337/how-to-downgrade-my-sdk-version


 

 

Friday, 12 December 2014

[SOLVED] Android Studio install problem, fails saying components not installed

ERROR:
The following SDK components were not installed: extra-android-m2repository, tools, addon-google_apis-google-21, android-21, sys-img-x86-addon-google_apis-google-21, source-21, extra-google-m2repository "

SOLUTION:

It seems there is a problem with the default installer of Android Studio 1.0.0, the one that contains both the IDE and the SDK Tools: the default installation path for the Android SDK Tools ends with myInstallPath../sdk/android-sdk, but the first-run setup of Android Studio points at myInstallPath../sdk. So here is what I did.

  • Step 1: Cut and paste the content of the android-sdk folder out of the sdk folder, so the files end up under myInstallPath../android-sdk instead of under myInstallPath../sdk/ (see the sketch after these steps).
  • Step 2: Use the SDK Manager from the new location to update everything. (Use this instead of the automatic wizard: you can pick exactly which packages you want or need, and you also see the download speed and estimated time.)
  • Step 3: Run Android Studio. It should load, check that the SDK is up to date, start the creation of an AVD, and after that the IDE loads completely.
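
Step 1 is just a folder move. On Windows it can be done from an elevated command prompt; a rough sketch using the placeholder paths from above (substitute your real install location):

move "myInstallPath\sdk\android-sdk" "myInstallPath\android-sdk"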
UPDATE - Workaround for the "download interrupted: read timed out" problem
The firewall and the proxy prevented my SDK Manager from downloading some updates, so I recovered the .xml URL for those updates, searched it for the .zip files I needed, downloaded them directly with the help of a download manager, and then manually installed them in their respective folders under the sdk folder. A bit tricky, but it worked for me. For example, in the Addon_xml_file I searched for the m2repository entry and downloaded the archive via the m2repository_r14_zip_file link. You always find the files you need at the same base URL as the .xml file (take a look at the URL I posted for the example).

Source: http://stackoverflow.com/questions/27376465/android-studio-doesnt-start-fails-saying-components-not-installed

Thursday, 11 December 2014

[LINUX] Format USB Drive in the Terminal


1. Insert your USB drive into your system.

2. Open the terminal. (CTRL + ALT + T)

3. Look for the USB drive you want to format, by running:

df

The command above will display the directory path of your various drives. Take note of the drive you wish to format.


In this tutorial, the name of the drive I am going to format is Seth and its device path is /dev/sdc1.
NOTE: if df cannot list your USB drive, you can use the following command instead:
dmesg | tail

4. Unmount the drive, as root or with sudo, using the syntax below:

umount /dev/sdc1



5. Now run this command to format the drive as FAT32:

mkfs.vfat -n 'Ubuntu' -I /dev/sdc1



Understanding the above command

mkfs
mkfs is used to build a Linux filesystem on a device, usually a hard disk partition. The device argument is either the device name (e.g. /dev/hda1, /dev/sdb2), or a regular file that shall contain the filesystem. The size argument is the number of blocks to be used for the filesystem.

vfat
Formats the drive as FAT32. Other mkfs variants are available, such as mkfs.bfs, mkfs.ext2, mkfs.ext3, mkfs.ext4, mkfs.minix, mkfs.msdos, mkfs.vfat, mkfs.xfs, mkfs.xiafs, etc.

-n
Volume-name sets the volume name (label) of the file system. The volume name can be up to 11 characters long. The default is no label. In this tutorial my volume-name is Ubuntu.

-I
It is typical for fixed disk devices to be partitioned, so by default you are not permitted to create a filesystem across the entire device; -I overrides this check.

Running df again, once the drive is remounted, shows the freshly formatted filesystem.



You are done and your pen drive has successfully been formatted.
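
If you do not want to unplug and re-insert the stick to get it mounted again, you can mount it back by hand. A sketch reusing the device from this tutorial (pick whatever mount point you like):

sudo mkdir -p /mnt/usb
sudo mount /dev/sdc1 /mnt/usb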


Source: http://www.unixmen.com/how-to-format-usb-drive-in-the-terminal/

Sunday, 30 November 2014

[UBUNTU, CENTOS, SUSE, FEDORA ...] Automount HDD NTFS type in linux system

Note :

If your system can already mount your hard drive manually (it just doesn't automount), skip this step and go straight to Main Action.

Prepare Action

While older ntfs drivers were prone to eat your data in r/w-mode, ntfs-3g seems to be r/w safe. See the ntfs-3g page for more information.
 

Note: As of CentOS 5.4 (kernel 2.6.18-164 or newer), the fuse kernel module is included in the kernel itself. Therefore, dkms and dkms-fuse are no longer required. If you have previously installed dkms-fuse, please uninstall it by a yum remove dkms-fuse command. Please note that CentOS-4 users need those 2 packages.
Make sure you have the rpmforge repo installed. If not, refer to Repositories.
Install the following packages.

yum install fuse fuse-ntfs-3g  (*)


If the rpmforge repo is disabled by default,

yum --enablerepo=rpmforge install fuse fuse-ntfs-3g (option)


Note for CentOS-5 users: If you are still running CentOS 5.3 or older, then you would need to install kmod-fuse from ELRepo.
For CentOS-7 and CentOS-6 the EPEL repository is carrying later NTFS packages. EPEL is also usable for CentOS-5. To install, after enabling the repo per the Repositories page:

yum install ntfs-3g  (*)


or if you prefer to leave EPEL disabled by default

yum --enablerepo=epel install ntfs-3g (option)


You may also want to

yum install ntfsprogs ntfsprogs-gnomevfs  (*)


for additional functionality. (Take, for example, ntfsclone to copy ntfs-partitions with or without empty space.)

 

Main Action 

Mounting an NTFS filesystem

Suppose your NTFS filesystem is /dev/sda1 and you are going to mount it on /mymnt/win. Do the following.

First, create a mount point.

mkdir /mymnt/win


Next, edit /etc/fstab as follows. To mount read-only:

/dev/sda1 /mymnt/win ntfs-3g ro,umask=0222,defaults 0 0


Or to mount read-write:

/dev/sda1 /mymnt/win ntfs-3g rw,umask=0000,defaults 0 0


You can now mount it by running:

mount /mymnt/win
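
If you just want to test the mount once without editing /etc/fstab, a direct mount with the same device and mount point should also work:

mount -t ntfs-3g /dev/sda1 /mymnt/win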

Thursday, 17 July 2014

Should we use EJB in our project?

To me this thread shows exactly what is wrong with our profession. Too many people read the marketing literature and take it as gospel.

You use EJB when and only when you need the container services. That is, transaction, persistence and security. Now if you follow the advice of most and don't use Entity beans that leaves you with security and transaction. Very few people use the security features on EJB so that leaves you with transactions. There are many other often easier ways to deal with transactions though so it is still questionable if you ever need EJB.

Let us examine some of the comments made so far.

1. Scalability - Why is it more scalable? If you are using stateless session beans you are scalable because you are stateless. But if you are stateless it is just as easy to use POJOs (plain old Java objects). Stateless design leads to better scalability, but EJBs are not necessary to make your application stateless. Again, the only advantage is the container services provided - otherwise it is just code, folks, and your code will work just fine.

While we are at it, let's address another common misconception: lifecycle management. It is often claimed that EJB gives you some necessary lifecycle management and object pooling. It is also supposed to help you in a multi-threaded environment. The trouble with this claim is that if you are stateless there are no threading issues, and you only need one instance to handle all of your clients on a server. Servlets and stateless session beans are essentially equivalent (keep in mind that HTTPServlets are only one type of Servlet). In the world of Servlets the spec allowed you to either create a new Servlet for every client or to use one and make it thread safe. Every servlet container does it the second way, and yet EJB only allows containers to do it the first way.

2. Performance - ??? What are they doing to improve performance? I can either call an object directly or go through layers of infrastructure - which do you think would be faster? Again, only if we need the infrastructure is it an advantage (and NONE of the EJB infrastructure is there to help with performance).

3. Maintainability - If I use an OO programming methodology instead of the procedural-style EJB, I will be better off here, not worse. Why do people think that only EJB will get you modularity? Also, testability is more difficult with EJB than with POJOs, and by decreasing testability I decrease maintainability.

4. Different clients - Granted, you need a server to handle the different clients, but you don't need EJB for them all to call the same code. You just need the same object to call. Objects themselves don't care about who your client is. If I have some object Foo with a method bar, and that method is such that I can have copies of it in different app servers that are all the same (a requirement for clustering), then all I have to do is have each client create a Foo and call foo.bar(). Why is it better to do a JNDI lookup to find the Foo (when they are all the same) and then use an RMI layer to call bar? Even if we are calling the database here, it is the connection pooling and the database that give you your advantages - and there were connection pools long before there were app servers, and you can use them in an app server even if you are a POJO.

5. Database access - One poster said that if you use your JSP/Servlets to access the database all the time, it will be slower because you don't get help from the container. This is a false dichotomy. I can still have an object layer and a data access layer and never have my JSP/Servlets directly make SQL calls. I don't need EJB for that, and unless you are using Entity beans (which means you have an extremely simple domain model), the container is doing nothing for you here.

History is important here. EJB arose out of a desire to make CORBA easier to deal with. The reason you would use CORBA is as an integration technology. You had a system (maybe an old batch system) that you wanted to access from an OO or other program. With a CORBA interface you could make it more object-like to the client. When doing this properly, there were some cross-cutting concerns that crop up, like transaction management, security, etc., that people designed frameworks to get around. EJB was supposed to help here. The problem is that the context - a wrapper for older technology or across system boundaries - was lost, and now people advocate using EJB when it is all one system, you don't distribute, and it is a brand new app. But why? I love JMS for integration, but I don't think you should endlessly send messages to yourself within an application.

Most projects should have at most one or two EJBs, but I see people create systems with hundreds of them. This is stupid and wasteful. You have made your build times longer, your deployments more complex, and your code harder to test. What would be better for our industry is for people to actually learn about OO design and stop trying to shove procedural technology in the way and calling that progress.

source: http://www.theserverside.com/discussions/thread.tss?thread_id=30165

Monday, 14 April 2014

[MYSQL] #1045 error And reset password

If you have actually set a root password and you've just lost/forgotten it:
  1. Stop MySQL: sudo /etc/init.d/mysql stop
  2. Restart it manually with the skip-grant-tables option:  mysqld_safe --skip-grant-tables
  3. Run the MySQL client: mysql -u root
  4. Reset the root password manually with this MySQL command: UPDATE mysql.user SET Password=PASSWORD('password') WHERE User='root';
  5. Flush the privileges with this MySQL command: FLUSH PRIVILEGES;
From http://www.tech-faq.com/reset-mysql-password.shtml
(Maybe this isn't what you need, Abs, but I figure it could be useful for people stumbling across this question in the future)
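
Once the new password is set, stop the temporary --skip-grant-tables instance and start MySQL normally again. A rough sketch (the exact commands depend on your distribution):

sudo killall mysqld
sudo /etc/init.d/mysql start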

source:  http://stackoverflow.com/questions/489119/mysql-error-1045-access-denied

[TOMCAT] Unable to set localhost

# sudo vim /etc/hosts 
 If /etc/hosts doesn't contain a definition for the hostname, it fails. Just add your hostname to /etc/hosts; for example, if your hostname is work, add or modify the following line:

127.0.0.1   work        localhost
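
If you are not sure what your machine's hostname actually is, the hostname command prints it:

hostname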
 
source: http://stackoverflow.com/questions/4969156/error-java-net-unknownhostexception 

[ALL SYSTEM] Setup JAVA_HOME

Login as root user

vim /etc/profile

Set JAVA_HOME as follows using syntax export JAVA_HOME=<path-to-jdk>

If your jdk path is set to /usr/java/jdk1.5.0_07, add this command to file:

export JAVA_HOME=/usr/java/jdk1.5.0_07

Set PATH as follows:

export PATH=$PATH:$JAVA_HOME/bin

 then save the file and quit vim as follows:

step 1: press the Esc key on the keyboard

step 2: press the : key on the keyboard

step 3: type the command wq

step 4: press the Enter key on the keyboard

After saving successfully, run the following command to make the system load the profile file again.

. /etc/profile  

 

  Run the command java -version to check whether it was successful.

[SUSE] start apache2, mysql


APACHE2:
linux-8wi2:~ # /etc/init.d/apache2 start
redirecting to systemctl start apache2.service
linux-8wi2:~ # 


MYSQL:
linux-8wi2:~ # /etc/init.d/mysql start
redirecting to systemctl start mysql.service
linux-8wi2:~ #
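
As the output shows, the init scripts just redirect to systemd, so you can also call systemctl directly (and enable the services at boot if you want):

systemctl start apache2.service
systemctl start mysql.service
systemctl enable apache2.service mysql.service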

source:http://www.cyberciti.biz/faq/linux-start-apache/

Wednesday, 26 March 2014

[UBUNTU] Install LAMP

Install tasksel in ubuntu
In terminal:
sudo apt-get install tasksel

sudo tasksel



choose LAMP server (use the spacebar to select it)
and press Tab to move to OK, then press Enter

During the installation you will be asked to enter the MySQL root password.
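
If you prefer to skip the menu, tasksel can also install the task directly from the command line. A sketch (the task name may vary between releases):

sudo tasksel install lamp-server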

Saturday, 8 March 2014

[UBUNTU] Setup JAVA_HOME

System-wide environment variables

/etc/profile.d/*.sh

Files with the .sh extension in the /etc/profile.d directory get executed whenever a bash login shell is entered (e.g. when logging in from the console or over ssh), as well as by the DisplayManager when the desktop session loads.
You can for instance create the file /etc/profile.d/myenvvars.sh and set variables like this:  
Suppose my jdk1.7.0 is in the folder /home/approved/Downloads:

sudo -i
(type your password)
cat >/etc/profile.d/myenvvars.sh
(press Enter, then type:)
export JAVA_HOME=/home/approved/Downloads/jdk1.7.0
export PATH=$PATH:$JAVA_HOME/bin
(then press Ctrl+D to finish writing the file)

Log out and back in, then in a terminal run:
java -version
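
To confirm the variable itself was picked up after logging back in:

echo $JAVA_HOME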

Friday, 21 February 2014

[HIBERNATE] property: hibernate.hbm2ddl.auto

hibernate.hbm2ddl.auto Automatically validates or exports schema DDL to the database when the SessionFactory is created. With create-drop, the database schema will be dropped when the SessionFactory is closed explicitly.
e.g. validate | update | create | create-drop
So the list of possible options are,
  • validate: validate the schema, makes no changes to the database.
  • update: update the schema.
  • create: creates the schema, destroying previous data.
  • create-drop: drop the schema at the end of the session.
These options are intended as developer tools and not to facilitate production-level databases; you may want to have a look at the following question: Hibernate: hbm2ddl.auto=update in production?
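
For reference, the property usually goes into hibernate.cfg.xml; a minimal sketch (it can equally be set in hibernate.properties or programmatically):

<property name="hibernate.hbm2ddl.auto">update</property>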

Wednesday, 19 February 2014

[HIBERNATE] one-to-one mapping using annotations

Hibernate one-to-one mapping using annotations

If you are working on any Hibernate project, or you are planning to work on one in the future, you can easily understand the one-to-one relationships between several entities in your application. In this post, I will discuss the variations of one-to-one mapping supported in Hibernate.
Download source code
Sections in this post:
Various supported techniques
Using foreign key association
Using a common join table
Using shared primary key
For this article, I am extending the example written for the hello world example. We have two entities here: Employee and Account.

Various supported techniques

In Hibernate there are 3 ways to create one-to-one relationships between two entities. In every case you have to use the @OneToOne annotation. The first technique is the most widely used and uses a foreign key column in one of the tables. The second technique uses the well-known solution of having a third table to store the mapping between the first two tables. The third technique is something new: it uses a common primary key value in both tables.
Let's see them in action one by one:

Using foreign key association

In this association, a foreign key column is created in the owner entity. For example, if we make EmployeeEntity the owner, then an extra column "ACCOUNT_ID" will be created in the Employee table. This column will store the foreign key for the Account table.
Table structure will be like this:
[diagram: foreign key association, one to one]
To create this association, reference the account entity in the EmployeeEntity class as follows:
@OneToOne
@JoinColumn(name="ACCOUNT_ID")
private AccountEntity account;
The join column is declared with the @JoinColumn annotation, which looks like the @Column annotation. It has one more parameter, named referencedColumnName. This parameter declares the column in the targeted entity that will be used for the join.
If no @JoinColumn is declared on the owner side, the defaults apply. A join column(s) will be created in the owner table and its name will be the concatenation of the name of the relationship in the owner side, _ (underscore), and the name of the primary key column(s) in the owned side.
In a bidirectional relationship, one of the sides (and only one) has to be the owner: the owner is responsible for the association column(s) update. To declare a side as not responsible for the relationship, the attribute mappedBy is used. mappedBy refers to the property name of the association on the owner side.
@OneToOne(mappedBy="account")
private EmployeeEntity employee;
The "mappedBy" attribute above declares that this side depends on the owner entity for the mapping.
Let's test the above mappings with running code:
import org.hibernate.Session;

public class TestForeignKeyAssociation {

  public static void main(String[] args) {
    Session session = HibernateUtil.getSessionFactory().openSession();
    session.beginTransaction();

    AccountEntity account = new AccountEntity();
    account.setAccountNumber("123-345-65454");

    // Add new Employee object
    EmployeeEntity emp = new EmployeeEntity();
    emp.setEmail("demo-user@mail.com");
    emp.setFirstName("demo");
    emp.setLastName("user");

    // Save Account
    session.saveOrUpdate(account);
    // Save Employee
    emp.setAccount(account);
    session.saveOrUpdate(emp);

    session.getTransaction().commit();
    HibernateUtil.shutdown();
  }
}
Running the above code creates the desired schema in the database and runs these SQL queries:
Hibernate: insert into ACCOUNT (ACC_NUMBER) values (?)
Hibernate: insert into Employee (ACCOUNT_ID, EMAIL, FIRST_NAME, LAST_NAME) values (?, ?, ?, ?)
You can verify the data and mappings in both tables when you run the above program. :-)

Using a common join table

This approach is not new to any of us. Let's start with the target DB structure for this technique.
[diagram: join table, one to one mapping]
In this technique, the main annotation to be used is @JoinTable. This annotation is used to define the new table name (mandatory) and the foreign keys from both tables. Let's see how it is used:
@OneToOne(cascade = CascadeType.ALL)
@JoinTable(name="EMPLOYEE_ACCCOUNT", joinColumns = @JoinColumn(name="EMPLOYEE_ID"),
    inverseJoinColumns = @JoinColumn(name="ACCOUNT_ID"))
private AccountEntity account;
@JoinTable annotation is used in EmployeeEntity class. It declares that a new table EMPLOYEE_ACCOUNT will be created with two columns EMPLOYEE_ID (primary key of EMPLOYEE table) and ACCOUNT_ID (primary key of ACCOUNT table).
Testing the above entities generates the following SQL queries in the log files:
Hibernate: insert into ACCOUNT (ACC_NUMBER) values (?)
Hibernate: insert into Employee (EMAIL, FIRST_NAME, LAST_NAME) values (?, ?, ?)
Hibernate: insert into EMPLOYEE_ACCCOUNT (ACCOUNT_ID, EMPLOYEE_ID) values (?, ?)

Using shared primary key

In this technique, Hibernate ensures that a common primary key value is used in both tables. This way the primary key of EmployeeEntity can safely be assumed to be the primary key of AccountEntity as well.
Table structure will be like this:
[diagram: shared primary key, one to one]
In this approach, @PrimaryKeyJoinColumn is the main annotation to be used. Let's see how to use it:
@OneToOne(cascade = CascadeType.ALL)
@PrimaryKeyJoinColumn
private AccountEntity account;
On the AccountEntity side, it remains dependent on the owner entity for the mapping.
@OneToOne(mappedBy="account", cascade=CascadeType.ALL)
private EmployeeEntity employee;
Testing the above entities generates the following SQL queries in the log files:
Hibernate: insert into ACCOUNT (ACC_NUMBER) values (?)
Hibernate: insert into Employee (ACCOUNT_ID, EMAIL, FIRST_NAME, LAST_NAME) values (?, ?, ?, ?)
So, we have seen all 3 types of one-to-one mapping supported in Hibernate. I suggest you download the source code and play with it.
Happy Learning !!

Download source code



Source: http://howtodoinjava.com/2012/11/15/hibernate-one-to-one-mapping-using-annotations/

Tuesday, 18 February 2014

[HIBERNATE] Compound Primary Key in Hibernate

Question: given the table below, how do you write a Java class for the composite key (i.e. how do you map a composite key in Hibernate)?
create table Time (
        levelStation int(15) not null,
        src varchar(100) not null,
        dst varchar(100) not null,
        distance int(15) not null,
        price int(15) not null,
        confPathID int(15) not null,
        constraint ConfPath_fk foreign key(confPathID) references ConfPath(confPathID),
        primary key (levelStation,ConfPathID)
)ENGINE=InnoDB  DEFAULT CHARSET=utf8 ;
To map a composite key, you can use the EmbeddedId or the IdClass annotations. I know this question is not strictly about JPA, but the rules defined by the specification also apply. So here they are:

2.1.4 Primary Keys and Entity Identity

...
A composite primary key must correspond to either a single persistent field or property or to a set of such fields or properties as described below. A primary key class must be defined to represent a composite primary key. Composite primary keys typically arise when mapping from legacy databases when the database key is comprised of several columns. The EmbeddedId and IdClass annotations are used to denote composite primary keys. See sections 9.1.14 and 9.1.15.
...
The following rules apply for composite primary keys:
  • The primary key class must be public and must have a public no-arg constructor.
  • If property-based access is used, the properties of the primary key class must be public or protected.
  • The primary key class must be serializable.
  • The primary key class must define equals and hashCode methods. The semantics of value equality for these methods must be consistent with the database equality for the database types to which the key is mapped.
  • A composite primary key must either be represented and mapped as an embeddable class (see Section 9.1.14, “EmbeddedId Annotation”) or must be represented and mapped to multiple fields or properties of the entity class (see Section 9.1.15, “IdClass Annotation”).
  • If the composite primary key class is mapped to multiple fields or properties of the entity class, the names of primary key fields or properties in the primary key class and those of the entity class must correspond and their types must be the same.

With an IdClass

The class for the composite primary key could look like (could be a static inner class):
public class TimePK implements Serializable {
    protected Integer levelStation;
    protected Integer confPathID;

    public TimePK() {}

    public TimePK(Integer levelStation, Integer confPathID) {
        this.levelStation = levelStation;
        this.confPathID = confPathID;
    }
    // equals, hashCode
}
And the entity:
@Entity
@IdClass(TimePK.class)
class Time implements Serializable {
    @Id
    private Integer levelStation;
    @Id
    private Integer confPathID;

    private String src;
    private String dst;
    private Integer distance;
    private Integer price;

    // getters, setters
}
The IdClass annotation maps multiple fields to the table PK.
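
The spec quoted above requires the key class to define equals and hashCode consistently with database equality. A minimal sketch of what they could look like for TimePK (it assumes an import of java.util.Objects):

@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof TimePK)) return false;
    TimePK other = (TimePK) o;
    return Objects.equals(levelStation, other.levelStation)
            && Objects.equals(confPathID, other.confPathID);
}

@Override
public int hashCode() {
    return Objects.hash(levelStation, confPathID);
}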

With EmbeddedId

The class for the composite primary key could look like (could be a static inner class):
@Embeddable
public class TimePK implements Serializable {
    protected Integer levelStation;
    protected Integer confPathID;

    public TimePK() {}

    public TimePK(Integer levelStation, Integer confPathID) {
        this.levelStation = levelStation;
        this.confPathID = confPathID;
    }
    // equals, hashCode
}
And the entity:
@Entity
class Time implements Serializable {
    @EmbeddedId
    private TimePK timePK;

    private String src;
    private String dst;
    private Integer distance;
    private Integer price;

    //...
}
The @EmbeddedId annotation maps a PK class to table PK.

Differences:

  • From the physical model point of view, there are no differences
  • EmbeddedId somehow communicates more clearly that the key is a composite key, and IMO it makes sense when the combined PK is either a meaningful entity itself or is reused in your code.
  • @IdClass is useful to specify that some combination of fields is unique but these do not have a special meaning.
They also affect the way you write queries (making them more or less verbose):
  • with IdClass
    select t.levelStation from Time t
  • with EmbeddedId
    select t.timePK.levelStation from Time t

References

  • JPA 1.0 specification
    • Section 2.1.4 "Primary Keys and Entity Identity"
    • Section 9.1.14 "EmbeddedId Annotation"
    • Section 9.1.15 "IdClass Annotation"

Hibernate is really great but it definitely needs some skills and experience: knowledge of ORM concepts, mappings and relations is welcome, as is an understanding of how the Session works and of more advanced concepts like lazy loading, fetching strategies, and caching (first-level cache, second-level cache, query cache), etc.
Sure, when used correctly, Hibernate is an awesome weapon: it's efficient, it will generate better SQL queries than lots (most?) of developers, it's very powerful, it performs very well, etc. However, I've seen many projects using it very badly (e.g. not using associations because they were scared to "fetch the whole database", and this is far from the worst horror) and I'm thus always a bit suspicious when I hear "we are doing Hibernate". Actually, in the wrong hands, it can be a real disaster.
So, if I had to mention one weakness, it would be the learning curve. Don't underestimate it.

Source: http://stackoverflow.com/questions/1607819/weaknesses-of-hibernate/1609631#1609631

Thursday, 6 February 2014