With a small dataset (3,000 records, 1.5 MB total data length) that is expected to grow by roughly 1,000 entries per year, does it matter?
Let's see: 1.5 MB across 3,000 records is about 0.5 kB, roughly 500 bytes, per record. That is not much at all. Or do you mean that each row holds 1.5 MB?
And I assume that by "binary fields" you mean BLOBs (Binary Large OBjects). Big fields.
Anyhow, increasing complexity out of fear will get you nowhere. If problems do show up, there are plenty of things you can do, and most of them do not involve the table structure. Why? Because if anything is slow, it is probably either searching through these fields or the network traffic generated by shipping that data to your application.
Searching through the fields can be tackled with proper indexing, or even extra checksum fields if that ever becomes necessary (a sketch follows below). Note that it hardly matters how many tables you have to search, except that more tables add more complexity.
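For example, if equality lookups on a BLOB column ever get slow, a checksum column plus an index keeps the BLOB itself out of the search path. This is only a sketch: the table `documents` and its columns `payload`/`id` are made-up names for illustration, and it assumes MySQL 5.7+ for generated columns.

```sql
-- Hypothetical table and column names, for illustration only.
-- A stored generated column holds a CRC32 of the BLOB and gets an index,
-- so equality searches hit a small integer index instead of scanning BLOBs.
ALTER TABLE documents
  ADD COLUMN payload_crc INT UNSIGNED
    GENERATED ALWAYS AS (CRC32(payload)) STORED,
  ADD INDEX idx_payload_crc (payload_crc);

-- Look up by checksum first, then compare the BLOB itself
-- to rule out the rare CRC32 collision.
SET @needle = LOAD_FILE('/tmp/example.bin');  -- or bind the value from the application
SELECT id
FROM documents
WHERE payload_crc = CRC32(@needle)
  AND payload = @needle;
```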
The network load can be reduced by caching or compression (compression helps if the database is far away; it is not very helpful on a local network). If your database runs on the same machine as the application, a Unix socket is probably used anyway (look up the difference between 127.0.0.1 and localhost in the MySQL manual). A couple of quick checks are sketched below.
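As a quick sanity check (the `documents` column names are again made up): you can see where the server's local socket lives, and you can keep BLOBs off the wire simply by not selecting them when you only need metadata.

```sql
-- Connecting to "localhost" uses this Unix socket; "127.0.0.1" forces TCP.
SHOW VARIABLES LIKE 'socket';

-- Rough measure of how much data the server has sent to clients so far.
SHOW GLOBAL STATUS LIKE 'Bytes_sent';

-- Fetch only the columns you need; listing them explicitly keeps the
-- BLOB from being shipped to the application on every query.
SELECT id, title, created_at
FROM documents
WHERE id = 42;
```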
In short, premature optimization will usually bite you, especially if you don't know what you are optimizing for.
+++ Despite being wrong in every important aspect, that is a very good analogy +++
Hex (in Darwin's Watch)