It’s pretty well-known that the Oracle JDBC drivers pre-10g don’t support CLOBs that are larger than 4K. The 10g drivers do support large CLOBs (up to 4GB, I think) and even work with 9i, which is what we’re using. But there appear to be some creepy, made-for-Halloween bugs in there…
The particular table that is causing me grief has two CLOBs in it (which may be contributing to the problem). I’m using Hibernate to generate my SQL, which is really great.
So, the first problem that I encountered was that the values that I was setting for the two CLOBs were getting reversed. The specific scenario: both CLOBs were fairly large. Hibernate does a setCharacterStream when you declare a field to be Hibernate type “text”. But with large strings (the ones I used were greater than 4K), these fields were getting swapped on the way into the database. getCharacterStream seemed to behave “properly”, meaning that the values were still swapped when we pulled them out.
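For reference, the kind of mapping that sends Hibernate down the setCharacterStream path looks something like this. This is a hedged sketch, not my actual mapping; the class, table, and column names are invented for illustration:

```xml
<!-- Hypothetical Hibernate mapping; names are made up for illustration. -->
<class name="com.example.Story" table="STORY">
    <id name="id" column="STORY_ID">
        <generator class="native"/>
    </id>
    <!-- Declaring two properties as Hibernate type "text" makes Hibernate
         call PreparedStatement.setCharacterStream() for each -- the path
         where the two large values came back swapped for me. -->
    <property name="summary" type="text" column="SUMMARY"/>
    <property name="body" type="text" column="BODY"/>
</class>
```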
My workaround for this was to set the Hibernate property type to “string” and the database column sql-type to “clob”. Having Hibernate do setString on the PreparedStatement and getString on the ResultSet worked fine.
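In mapping terms, the workaround amounts to something like the following (again a sketch with invented property and column names, continuing the same hypothetical mapping):

```xml
<!-- Hibernate type "string" (so setString/getString are used) while the
     column itself stays a CLOB via the sql-type attribute. -->
<property name="summary" type="string">
    <column name="SUMMARY" sql-type="clob"/>
</property>
<property name="body" type="string">
    <column name="BODY" sql-type="clob"/>
</property>
```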
Until problem number two. I did an insert via a PreparedStatement with large (about 1700-character) strings. That insert worked fine. However, if you reuse the PreparedStatement, as Hibernate wisely does, the next insert fails. What’s the error you see? “Cannot set (primary key column) to null”. Of course, no one was trying to set that column to null.
These are drivers I picked up from Oracle’s site just a couple weeks back, so I think they’re the latest. I guess it’s time to file a bug report.