# TIL SQL Server 2014 Has In-Memory Tables

SQL Server 2014 has a new feature called “In-Memory OLTP” – it lets you declare a table as “memory optimized”, which loads that table into memory and works with it there instead of on disk. This, along with other related improvements, can get you anywhere from a 5x to 20x performance gain compared to traditional tables.
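
Here’s a minimal sketch of what that declaration looks like, assuming a database called MyAppDb and a made-up metering table (the names and file path are placeholders, not anything from the docs): memory-optimized tables live in a filegroup that CONTAINS MEMORY_OPTIMIZED_DATA, and the table itself is created WITH (MEMORY_OPTIMIZED = ON).

```sql
-- One-time setup: a filegroup and container for memory-optimized data.
-- "MyAppDb" and the file path are placeholders for your own database.
ALTER DATABASE MyAppDb
    ADD FILEGROUP MyAppDb_InMemory CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE MyAppDb
    ADD FILE (NAME = 'MyAppDb_InMemory_Container',
              FILENAME = 'C:\Data\MyAppDb_InMemory_Container')
    TO FILEGROUP MyAppDb_InMemory;
GO

-- A memory-optimized table. These tables need at least one index; here the
-- primary key is a nonclustered hash index with an estimated bucket count.
CREATE TABLE dbo.MeterReadings
(
    ReadingId BIGINT        NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    MeterId   INT           NOT NULL,
    Reading   DECIMAL(10,2) NOT NULL,
    ReadAt    DATETIME2     NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```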

As you can imagine, working with data in memory is dramatically faster than working with data stored on disk, and that appears to be one of the main benefits of an in-memory table. Reading and editing data in system memory avoids the latency associated with disk access and writes, and you can also reduce reliance on table and row locks since transactions complete significantly faster.

Accessing the data isn’t any different from traditional tables – a simple SQL query will work on both in-memory and disk tables, and you can even join between the two. However, the recommended method of data access is another new feature: natively compiled stored procedures. Just like the difference between a compiled and an interpreted language, natively compiling stored procedures and tables removes the time spent interpreting each query, and the compiled code can be reused for every request. Compilation happens when stored procedures are CREATEd or ALTERed, and also any time the database server restarts (which may add some overhead to restart times).
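
As a rough sketch building on the hypothetical dbo.MeterReadings table above (with dbo.Meters standing in for an ordinary disk-based table), a natively compiled procedure looks like a normal sproc plus a few required options: NATIVE_COMPILATION, SCHEMABINDING, an EXECUTE AS clause, and an ATOMIC block.

```sql
-- A natively compiled stored procedure that inserts into the in-memory table.
-- NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS, and BEGIN ATOMIC are required.
CREATE PROCEDURE dbo.usp_InsertMeterReading
    @ReadingId BIGINT,
    @MeterId   INT,
    @Reading   DECIMAL(10,2),
    @ReadAt    DATETIME2
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.MeterReadings (ReadingId, MeterId, Reading, ReadAt)
    VALUES (@ReadingId, @MeterId, @Reading, @ReadAt);
END;
GO

-- Interop still works: plain T-SQL can query the in-memory table directly and
-- even join it to a regular disk-based table (dbo.Meters here is hypothetical).
SELECT m.MeterId, r.Reading, r.ReadAt
FROM dbo.MeterReadings AS r
JOIN dbo.Meters AS m ON m.MeterId = r.MeterId;
```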

What happens to an in-memory table when the server is turned off? You can associate your in-memory table with a disk copy – SQL Server writes to this disk copy asynchronously after each transaction is committed, and only reads the data from disk again in the case of a server restart. Alternatively, you can specify that an in-memory table has no backing store, which removes any need for disk I/O, but all data will be lost in the case of a server restart or failover.
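
In table terms, that choice is the DURABILITY option – SCHEMA_AND_DATA (the default, used in the sketch above) keeps the disk copy, while SCHEMA_ONLY keeps only the table definition. A hypothetical non-durable table might look like this:

```sql
-- DURABILITY = SCHEMA_ONLY: only the table definition survives a restart.
-- Rows live purely in memory, so there is no disk I/O for the data at all.
CREATE TABLE dbo.VolatileCounters
(
    CounterId INT    NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 131072),
    Hits      BIGINT NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```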

There are a few key scenarios in which in-memory tables can net you a huge performance gain.

  • Applications that use stored procedures (compared to ad-hoc / generated queries)
  • Apps that make calls to large stored procedures that contain complex functionality, compared to lots of calls to quick sprocs
  • High data insertion rates, such as a logger or smart meter
  • Large-volume read operations on read-only data – instead of having to read the data from disk every time, load it into memory. Think product catalogs, social networks, leaderboards, recommendations
  • Session state management – where you may need a shared source of volatile data, like a user’s session information. We don’t necessarily care if it’s persisted to disk, and we can access that resource from anywhere in our web farm (a sketch of this follows the list).
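
For that last scenario, a SCHEMA_ONLY table is a natural fit – the rows are shared by every server in the farm but nothing is ever written to disk. A sketch with a made-up session table (names and columns are assumptions, not from the docs):

```sql
-- Hypothetical session-state table: shared across the web farm, never persisted.
CREATE TABLE dbo.SessionState
(
    SessionId    UNIQUEIDENTIFIER NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    SessionData  VARBINARY(4000)  NULL,  -- LOB types aren't supported in 2014's memory-optimized tables
    LastAccessed DATETIME2        NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO

-- Any server in the farm can create or touch a session with ordinary T-SQL.
DECLARE @SessionId UNIQUEIDENTIFIER = NEWID();

INSERT INTO dbo.SessionState (SessionId, SessionData, LastAccessed)
VALUES (@SessionId, 0x, SYSUTCDATETIME());

UPDATE dbo.SessionState
SET LastAccessed = SYSUTCDATETIME()
WHERE SessionId = @SessionId;
```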

Check out the docs for In-Memory OLTP to learn more and see how to implement memory-optimized tables and natively compiled sprocs.
