- Finalize the Liquibase component with 100% code coverage (an extender may come in the future)
- Upload a newly patched commons-dbcp to our Maven repository (we already use a patched version, but a new bug surfaced with Liquibase that is already solved in their JIRA)
- Finish some QueryDSL-Liquibase based components, like sequence support
- Migrate many of our components from Blueprint-JPA to DS-Liquibase-QueryDSL (this is necessary to be able to release 1.0.0)
- Create a sample for our project with a web interface. I would like a structure with REST services similar to the one the enRoute project has (we have used JSF for 5 years and it is time to change to a more effective solution)
- Contribute to maven-bundle-plugin: Add m2e-lifecycle to generate DS and Metatype files to the target/classes directory
- Migrate all of our components from felix-scr-annotations to org.osgi based annotations
- Contribute to enRoute: create a QueryDSL-Liquibase based branch that can be used instead of JPA
- Add an m2e lifecycle to our Maven plugin that updates the running OSGi container as soon as any dependency of the project changes
- See how our components can be distributed to Maven Central
I recently heard that BoneCP is the connection pool that developers will use in the future, so I started to check why this pool is so popular. Well, they say they have very good benchmark results. The first thing I asked: does BoneCP have a JTA-aware pool as well? I found an answer on Stack Overflow:
If timelines permit and there’s this requirement, I can add JTA support to BoneCP if you want.
Wallace (BoneCP author)
Cool, there is hope. However, I found another answer written by Wallace in a BoneCP forum thread:
1. The pool does not deal with transactions at all, that’s not the job of the *connection* pool. It’s only job is to give you a connection when you ask for it.
It seems that Wallace has changed his mind. I disagree with him: it is not the job of an object pool to be transaction-aware, but if we talk about connection pooling, transaction handling must be implemented. In my opinion, a transaction-aware connection pool must handle connections in the following way:
- If there is a live transaction, newly provided connections should be enlisted
- Connections should stay enlisted until the transaction ends
- An enlisted connection should not be handed to another client that does not participate in the same transaction
- If a client needs a new connection, available connections already enlisted in the same transaction should be preferred
- The pool should wrap the provided PreparedStatement objects and keep them open until the connection is physically closed
There might be other rules. Based on the ones I collected, I do not see a way to separate the connection pooling logic from transaction enlistment.
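The reuse-within-a-transaction rules can be sketched with a toy model. Everything here is illustrative: `TxAwarePool` is a made-up class, connections are plain strings, and the transaction id stands in for a real `javax.transaction.Transaction`; a real pool would enlist an XAResource with the transaction manager instead.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Toy model of a transaction-aware pool: a connection enlisted in a
// transaction is reused by that transaction and hidden from all others
// until the transaction ends.
class TxAwarePool {
    private final Deque<String> free = new ArrayDeque<>();
    private final Map<String, String> enlisted = new HashMap<>(); // txId -> connection

    TxAwarePool(String... connections) {
        for (String c : connections) free.add(c);
    }

    // txId == null means a non-transactional client.
    synchronized String getConnection(String txId) {
        if (txId == null) {
            return free.remove();
        }
        // Prefer a connection already enlisted in the same transaction.
        String c = enlisted.get(txId);
        if (c == null) {
            c = free.remove();
            enlisted.put(txId, c); // stays enlisted until the tx ends
        }
        return c;
    }

    // Called on commit/rollback; only now does the connection
    // become available to other clients again.
    synchronized void transactionEnded(String txId) {
        String c = enlisted.remove(txId);
        if (c != null) free.add(c);
    }
}
```

The point of the toy: `getConnection` called twice within the same transaction yields the same physical connection, while a concurrent transaction gets a different one.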
I know of two vendor-independent connection pools that support JTA: commons-dbcp and XAPool. It seems that the latter is no longer under development.
In our projects we use commons-dbcp. It has several annoying bugs, so we had to deploy a patched version into our Maven repository. These patches are available in the JIRA of commons-dbcp. However, I got really sad when I saw that nobody had taken care of the issues for years, although there were useful contributions. I searched the mailing lists to find out whether the project is dead, and found answers like these:
Version 2 of Apache Commons Pool contains a completely re-written pooling implementation compared to the 1.x series. In addition to significant performance and scalability improvements – particularly under highly concurrent loads
There is no hard schedule. My guess, at least one issue is being addressed for [pool] 2.0.1 maintenance release, after that attention should shift to [dbcp].
Based on that, these are the options I see today:
- Use a vendor-dependent connection pool
- Use the connection pool provided by the TM (e.g. Bitronix has one)
- Use a patched version of commons-dbcp with JTA
- Use XAPool (it seems that the project is dead)
- Use BoneCP, C3P0, … without JTA
- Use BoneCP, C3P0, … with JTA and allow about ten times more database connections than threads (as each getConnection() call will hold a connection until the transaction ends)
It is funny that in 2013 we cannot choose a JTA-aware, vendor-independent, effective connection pool that “just works”. However, there is hope that commons-dbcp 2.0 will do the job.
A while ago I decided to leave JPA and use QueryDSL instead. JPA 2 was great to go with, but after a while the life-cycle of Entity objects took more than it gave. The Criteria API was the only part of JPA that we really liked.
With OSGi it is much easier to develop reusable modules, and with the help of type-safe SQL queries (like the JPA Criteria API and QueryDSL) this reusability can be extended to the persistence level.
Take the Localization example. With Localization you can have an independent table in a separate module. Based on that table you can provide the same features as ResourceBundle does in Java. It is also possible to create a util function that returns a subquery containing the logic (based on the COALESCE function). You can embed the localization logic into the selection part of a query without knowing about the complexity of the subquery. In the end, you are able to query just a range of an ordered result set from the database, where the ordering is done on the localized column.
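The COALESCE fallback can be illustrated outside SQL with a toy lookup. All names here are made up; in the real component this logic would live in a QueryDSL subquery over the localization table, where COALESCE(localized.value, default.value) picks the first non-null argument.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the COALESCE-based localization lookup: use the
// translation for the requested locale if one exists, otherwise
// fall back to the default-locale value.
class LocalizationLookup {
    // (key, locale) -> translated text
    private final Map<String, String> translations = new HashMap<>();
    private final String defaultLocale;

    LocalizationLookup(String defaultLocale) {
        this.defaultLocale = defaultLocale;
    }

    void put(String key, String locale, String text) {
        translations.put(key + "#" + locale, text);
    }

    String localize(String key, String locale) {
        String localized = translations.get(key + "#" + locale);
        // COALESCE semantics: the first non-null value wins.
        return (localized != null) ? localized
                : translations.get(key + "#" + defaultLocale);
    }
}
```

Because the database evaluates the same fallback inside the query, it can also ORDER BY the coalesced column and return only a page of the ordered result set.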
Question: How can we create the DDL scripts, and when should we run them?
I have checked many projects, and it seems that Liquibase is the tool I need.
Liquibase in short: I can create a mostly database-independent XML changelog. The XML can contain the schema definition but also upgrade scripts (in case the current database uses an older version). Liquibase can run against an existing database, validate it, or advise which SQL statements the DB admin should apply to a production database.
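A minimal changelog sketch of what such an XML looks like (table and column names are made up for the Localization example):

```xml
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.0.xsd">

  <!-- Initial schema -->
  <changeSet id="1" author="sample">
    <createTable tableName="LOCALIZATION">
      <column name="KEY_" type="varchar(255)">
        <constraints nullable="false"/>
      </column>
      <column name="LOCALE_" type="varchar(10)">
        <constraints nullable="false"/>
      </column>
      <column name="VALUE_" type="varchar(2000)"/>
    </createTable>
  </changeSet>

  <!-- Upgrade step; Liquibase applies it only if the database
       has not seen this changeSet yet -->
  <changeSet id="2" author="sample">
    <addColumn tableName="LOCALIZATION">
      <column name="DEFAULT_" type="boolean" defaultValueBoolean="false"/>
    </addColumn>
  </changeSet>
</databaseChangeLog>
```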
If I have a module like Localization, it seems obvious to put the Liquibase changelog XML into the META-INF folder of the bundle. But what should pick it up, and when?
I have a DS component in the bundle that registers the Localization service. Somehow the database should be validated/populated by Liquibase before my service is registered. I can do the database check when the DataSource reference is available in the component.
I have the following options, but I do not like any of them:
- I generate the SQL scripts at development time for every supported database system and put the native SQL scripts into the bundle as well. That way I could hand the SQL scripts to the DB admins. However, this solution is not really good, as Liquibase can also handle version upgrades, and with plain scripts it is not really possible to do anything at runtime (not even validation).
- Hard-code the database initialization part into the activate method of the component by calling Liquibase code. But what should be done then? Try to update the database and, if that is not an option, save the native SQL script somewhere on the file system or simply log it with LogService?
- Implement a solution that fits transparently into DS component registration, based on service properties. I am not sure it is even possible. What if my solution starts later than DS?
- Create Service Hooks… I will not do that, for sure 🙂
This is a difficult question. I think at least the schema validation should be done during component activation, when the DataSource is ready. I do not see how that is possible without writing Liquibase-based code into the activate method.
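The second option, calling Liquibase directly from the activate method, could look roughly like the sketch below. The component name, changelog path and DS 1.2 style bind method are assumptions, and error handling is omitted; it only shows the shape of the idea, not a finished implementation.

```java
import java.sql.Connection;

import javax.sql.DataSource;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

// Sketch: validate/upgrade the schema before the service logic starts.
@Component
public class LocalizationComponent {

    private DataSource dataSource;

    @Reference
    void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Activate
    void activate() throws Exception {
        // The DataSource reference is satisfied here, so the database
        // can be checked before the Localization service is registered.
        try (Connection connection = dataSource.getConnection()) {
            Database database = DatabaseFactory.getInstance()
                    .findCorrectDatabaseImplementation(new JdbcConnection(connection));
            Liquibase liquibase = new Liquibase("META-INF/changelog.xml",
                    new ClassLoaderResourceAccessor(getClass().getClassLoader()),
                    database);
            liquibase.update(""); // or only validate, per the options above
        }
        // ...initialize and register the Localization service here...
    }
}
```

The drawback is exactly what the text says: every such component has to repeat this boilerplate in its activate method instead of getting it transparently from the framework.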
Last week I attended EclipseCon Europe in Ludwigsburg. Although there were many interesting talks, I was more interested in the speeches of OSGiCon, which was co-located with EclipseCon. OSGi is a specification that has sneaked into the Enterprise Java world in the last years, and due to its qualities nobody will be able to kick it out.
In the last two years I pushed my colleagues to switch our projects to OSGi. I had a painful period, as I was the one my colleagues could point at when something did not work. I started to create a simple set of tools to support their work in a way that let them keep using the technologies they had used before. I presented these tools in a talk.
I visited several forums recently and saw that some people really prefer Declarative Services to Blueprint. As I had worked a lot with Spring, I chose Blueprint after switching to OSGi. However, we ran into many problems with Blueprint, and many of our projects could not reach version 1.0.0 because we missed a couple of features in Blueprint. One of my goals was to catch Peter Kriens for a little chat and ask him about our use-cases. Well, he convinced me to give DS a shot. I will try it out on one of our next projects.
Peter has started a new project called enRoute. As far as I understand, his goal is to have sample code that shows how to use OSGi in the enterprise world. It has every “layer” an enterprise application has to have (persistence, business logic and web UI). I am really happy that a project like this exists, as in the future I can tell my new colleagues to check it out to get the basic knowledge. Concerning the persistence layer, I found that he used a TransactionServletFilter. I have seen solutions like this before and I had a really bad feeling about them. I tried to explain this to Peter. I am not sure what his impression was (probably that I am a bit aggressive when I want to let others know my opinion :)), but mine was that it would be really necessary to collect all the use-cases of persistence handling. Therefore my next task will be to write down my thoughts in a very clear way, so they can be the base of a discussion.
Another impression I got at the conference is that many people prefer Gradle (and Ant) to Maven. I must say I was a bit surprised. My company switched from Ant to Maven as soon as it was possible. The early versions of the m2eclipse plugin were really buggy and we suffered a lot. However, it was still worth it, as the dependency management of Maven saved us lots of time. We have used Maven ever since and we are pretty satisfied. I really like the strict rules of Maven, as they keep my colleagues from creating shitty build files. As we work with Maven, a big part of my talk was about a Maven plugin that helps us a lot in working with OSGi.
After the talk I got the following negative feedback on the EclipseCon site: “Got the impossion that they missed what’s happened in the past few years, looked really old school”. This comment made me think a lot. What did I miss? I looked for solutions on the internet and could not find any that at least partly supports TDD with Maven and Eclipse. I know about Bndtools, but it supports Ant, which we left years ago. I know about Gradle, but it is still more the future; the Eclipse Gradle plugin lacks many features that the M2E plugin has. I would be really happy to get more information about what I missed in the past few years. In an answer I would like to see at least one “modern” alternative that supports the development workflow I introduced. Do not get me wrong, I am not offended, I am interested. I would be the happiest if I could tell my colleagues that there is a better solution than the current one.
I have had colleagues in the past years who said “we should use Gradle/Scala/NetBeans/IDEA/… as they are much more modern than the technologies we work with”. When I asked, none of them could show me a complete solution built on those technologies. I was flexible; I told them they could change our workflow as long as it provided the same code quality, but somehow they never came back with a showcase of the “modern technologies” :).
I also attended a very interesting discussion (OSGi enRoute: What Is It?), started by Peter Kriens. He talked about his new project, called enRoute; however, it soon became a discussion about OSGi itself. Why is it hard to learn? How could the marketing of OSGi be better? People had already had a couple of beers and wines before the meeting, so some of the speeches were a bit funny :). Although I am pretty sure I will follow the enRoute project, the more interesting part for me was that programmers could get in touch with the OSGi leaders in a very informal way. I told them their website is not the best: information is hidden, and people close the page before finding what they were looking for. I checked the site again now and found that there is much more information than a couple of years ago. I could find links to the tutorials. BUT. “Getting started” is still not the most important message of the site. If I look at it, I see that there is a conference. Cool. But if I look at the site of any other technology, the first thing I get is the “Get started for dummies”. I think the OSGi guys should refactor the site a bit, so that news is only the second most important thing on it.
I also had a nice chat with David Bosschaert. He said they are working on an OSGi certification program where not a framework but a programmer can get certified. I am very interested in where it goes, and I would like to be one of the first to get this certificate.
I think I got what I wanted from the conference. However, I am not sure I would feel this way if I had not attended the BoF. More BoFs (informal talks with the technology leaders, fueled by alcohol) next year!