MySQL Error: ER_TOO_BIG_ROWSIZE: Row size too large (> 8126)

I was recently scripting the creation of a MySQL database in Node.js. Most of my tables were created just fine, but one table failed again and again with the following error message:

code: 'ER_TOO_BIG_ROWSIZE',
errno: 1118,
sqlMessage: 'Row size too large (> 8126). Changing some columns to TEXT or BLOB may help. In current row format, BLOB prefix of 0 bytes is stored inline.',
sqlState: '42000',
index: 0,
sql: (super long sql command)

Admittedly, the table was very large, with 374 column definitions. (The 8126-byte figure in the error comes from InnoDB’s row size limit, which is a bit under half of its default 16KB page size, and a few hundred variable-length columns can blow past that quickly.) Still, I HAD to have this table, as I was dealing with data that I had no control over.

I considered the possibility of using MongoDB instead, but I was reluctant to go down the road of defining a large JSON document and delaying development of the app by having to pick and choose what data needed to be in the schema.

I also considered switching to PostgreSQL, but I did not know for sure that the switch would solve this issue, or if I’d just get a similar error message.

Luckily, many people have experienced this before me.

Based on several blogs and Stack Overflow responses, I identified two possible solutions.

Solution 1: Configure InnoDB Storage Engine

I’m no DBA, so I don’t know the particulars of MySQL storage engines and how they work. Honestly, I struggled to find where the storage engine configuration is displayed in phpMyAdmin. I did, however, find the setting in the my.cnf configuration file in my XAMPP installation directory (\xampp\mysql\bin\my.cnf).

So, per instructions on Stack Overflow, I followed this procedure:

First, edit my.cnf and add these lines of text:

innodb_file_per_table = 1
innodb_file_format = Barracuda

I added mine in the section of the file where I saw other innodb_* settings.

I restarted MySQL and was able to see the configuration changes in phpMyAdmin > Engines > InnoDB.
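If you’d rather verify from a SQL prompt than hunt through phpMyAdmin, the standard SHOW VARIABLES statements should confirm the new values. (Note that innodb_file_format only exists in older MySQL/MariaDB versions; newer releases use the Barracuda-style row formats by default and no longer have the variable.)

SHOW VARIABLES LIKE 'innodb_file_per_table';
SHOW VARIABLES LIKE 'innodb_file_format';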

Then, in my table create script, I added the option ROW_FORMAT=DYNAMIC. I also tried the option ROW_FORMAT=COMPRESSED.
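To make that concrete, here is a trimmed-down sketch of where the option goes. The table and column names are made up for illustration, not my actual 374-column definition:

CREATE TABLE wide_table (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  col_1 VARCHAR(255),
  col_2 VARCHAR(255),
  -- ...hundreds more columns in the real table...
  col_374 VARCHAR(255)
) ENGINE=InnoDB ROW_FORMAT=DYNAMIC;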

This solution, however, did NOT work for me. I suspect that I simply had too many columns.


Solution 2: Use a Different Storage Engine (MyISAM) for the Table

The second solution I tried, and the one that ultimately worked for me, was changing the storage engine of the table.

Since I had not yet created the table, this was very easy.

I simply added the following option to the end of my table create SQL command:

ENGINE=MyISAM
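In context, a minimal sketch of the create statement (hypothetical names again). If you have already created the table as InnoDB, an ALTER TABLE should switch the engine in place:

CREATE TABLE wide_table (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  col_1 VARCHAR(255),
  -- ...the other ~370 columns...
  col_374 VARCHAR(255)
) ENGINE=MyISAM;

-- or, for a table that already exists:
ALTER TABLE wide_table ENGINE=MyISAM;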

I was super excited to get the table created, but please note that using MyISAM rather than InnoDB comes with some compromises:

Table Locks When Writing Data

MyISAM, unlike InnoDB, locks the entire table when writing data (InnoDB locks only the rows being changed). So if you are building an app where you’ll need frequent updates or inserts to your tables, MyISAM may significantly slow down your write times. This can be really detrimental if multiple users are trying to write data at the same time.
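You can see the effect with two client sessions against a MyISAM table. A rough sketch, using the same hypothetical table as above:

-- Session 1: a long-running write takes a write lock on the whole table
UPDATE wide_table SET col_1 = 'x';

-- Session 2: issued while session 1 is still running; it blocks until the UPDATE finishes
INSERT INTO wide_table (col_1) VALUES ('y');

-- Either session can watch the waiting thread with:
SHOW PROCESSLIST;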

Luckily, my application is almost read-only. I’ll only need to update the data during down times in usage, and for me, faster reading is a higher priority.

You Give Up Transactions

If you need transactions to ensure data integrity, you may want to find another solution. MyISAM does not do transactions, so if your insert or update breaks midway through, you have no way of rolling back the improperly written records.
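Here is a quick sketch of what that looks like in practice (same hypothetical table): every statement against a MyISAM table is applied immediately, so the ROLLBACK has nothing to undo and MySQL only raises a warning about non-transactional tables.

START TRANSACTION;
INSERT INTO wide_table (col_1) VALUES ('a');
INSERT INTO wide_table (col_1) VALUES ('b');
ROLLBACK;
-- warning: some non-transactional changed tables couldn't be rolled back
SELECT COUNT(*) FROM wide_table;
-- both rows are still there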

No Foreign Key Constraints

If your data is highly relational, such that you need to enforce foreign key constraints across tables, you’re also out of luck with MyISAM.
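As far as I know, MySQL and MariaDB parse a FOREIGN KEY clause on a MyISAM table but silently ignore it, so nothing stops you from inserting orphaned rows. A small sketch with hypothetical tables:

CREATE TABLE parents (
  id INT NOT NULL PRIMARY KEY
) ENGINE=MyISAM;

CREATE TABLE children (
  id INT NOT NULL PRIMARY KEY,
  parent_id INT,
  FOREIGN KEY (parent_id) REFERENCES parents (id)
) ENGINE=MyISAM;

-- There is no parent with id 999, but MyISAM accepts this without complaint
INSERT INTO children VALUES (1, 999);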

Despite all of these limitations, using the MyISAM storage engine for this one big table in my database was a necessary solution that does not pose a problem for my application. The data I am storing changes rarely, roughly at monthly intervals, and for me, read speed is more important than transactional data integrity. Honestly, if I do have issues updating data, I have the option to nuke the entire table and recreate/insert all of the data.

Let Me Know What You Think

Feel free to comment on this solution for the “Too Big Rowsize” error.

Should I have gone with MongoDB? Would PostgreSQL have solved this issue?

Thanks for reading!
