I just got bitten by using writeUTF instead of writeUTFBytes. I knew they worked differently, but it took me a couple of Google searches to figure out why.
writeUTF appends a 16-bit length followed by your actual data; writeUTFBytes simply appends your data.
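You can see that length prefix directly. AS3's writeUTF follows the same layout as Java's DataOutputStream.writeUTF (which it was modeled on), so here's a quick Java sketch of what actually lands in the byte array:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Arrays;

public class UtfLengthPrefix {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);

        // Same layout AS3's writeUTF uses: a big-endian 16-bit
        // byte length, then the string data itself.
        out.writeUTF("Hi");

        System.out.println(Arrays.toString(buf.toByteArray()));
        // → [0, 2, 72, 105]  (length 2, then 'H', 'i')
    }
}
```

Those first two bytes are the header that writeUTFBytes leaves out.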
So why does writeUTF work like that?
When you read the data back in, readUTF can see exactly how long the string is and read in exactly that much. readUTFBytes, on the other hand, just reads however many bytes you ask for, with no way of knowing where the string ends. That seems like a minor difference, but imagine these two scenarios.
This works perfectly:
myByteArray.writeUTF("Hello World");
myByteArray.writeFloat(100);
...
a = myByteArray.readUTF();
b = myByteArray.readFloat();
This next example does not work, because readUTFBytes also reads in the float we wrote, leaving nothing for the readFloat call to retrieve:
myByteArray.writeUTFBytes("Hello World");
myByteArray.writeFloat(100);
...
a = myByteArray.readUTFBytes(myByteArray.length);
b = myByteArray.readFloat();
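The first scenario round-trips cleanly because the length prefix tells the reader where the string stops. A Java sketch of that working case, using DataOutputStream/DataInputStream as a stand-in for the AS3 ByteArray calls:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class RoundTrip {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeUTF("Hello World"); // length prefix marks where the string ends
        out.writeFloat(100f);

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        String a = in.readUTF();  // consumes exactly the string, no more
        float b = in.readFloat(); // the float is still there to read
        System.out.println(a + " / " + b); // → Hello World / 100.0
    }
}
```

There's no equivalent fix for the second scenario: once the string and the float are concatenated with no header, the boundary between them is simply gone.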
On the other hand, the writeUTF method can only handle strings whose encoded data fits in 64 KB, since the length prefix is an unsigned 16-bit integer. So if you have an arbitrarily sized string, it'll eventually fail. This is what I ran into.
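The failure is easy to reproduce. In AS3, writeUTF throws a RangeError when the string is longer than 65,535 bytes; Java's equivalent throws UTFDataFormatException for the same reason, which this sketch demonstrates:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UTFDataFormatException;

public class UtfLimit {
    public static void main(String[] args) throws IOException {
        DataOutputStream out =
                new DataOutputStream(new ByteArrayOutputStream());
        String big = "x".repeat(70_000); // encodes to more than 65,535 bytes

        try {
            out.writeUTF(big); // the 16-bit prefix can't represent this length
        } catch (UTFDataFormatException e) {
            System.out.println("too long for a 16-bit length prefix");
        }
    }
}
```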
So here are a couple of simple rules for choosing which to use:
1) If you need to write arbitrarily sized large strings, use writeUTFBytes, but don't write anything after it.
2) If you need to write multiple items to your byte array and read them back that way, use writeUTF, but don't exceed 64k strings.
3) If you need to create a byte array with no "header" of that leading length, use writeUTFBytes (like if you're making a stand-alone file to interoperate with some other system).
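If you need both a large string and more items after it, one workaround (not from the post, just a common pattern) is to write your own wider length header, then the raw bytes. In AS3 terms that would be writeUnsignedInt plus writeUTFBytes on the way out, and readUnsignedInt plus readUTFBytes(n) on the way back. A Java sketch of the same idea:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class BigStringPrefix {
    public static void main(String[] args) throws IOException {
        String big = "x".repeat(70_000); // too large for writeUTF's 16-bit prefix

        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);

        // Write our own 32-bit length header, then the raw string bytes.
        byte[] data = big.getBytes(StandardCharsets.UTF_8);
        out.writeInt(data.length);
        out.write(data);
        out.writeFloat(100f); // we can still append more items afterwards

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        byte[] back = new byte[in.readInt()]; // our header says how much to read
        in.readFully(back);
        String s = new String(back, StandardCharsets.UTF_8);
        System.out.println(s.length() + " / " + in.readFloat());
        // → 70000 / 100.0
    }
}
```

The 32-bit header costs two extra bytes per string, but it lifts the 64 KB ceiling while keeping the "read exactly this much" property that made readUTF safe.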