Why in-memory computing is the future of computing

Nikita Ivanov

How can you ensure that your applications are fast and scalable and have enough processing power? Instead of constantly upgrading your hardware, Nikita Ivanov has a different idea: in-memory computing.

Need to ensure a great end user experience for your data-intensive applications? Planning a new digital transformation initiative? Then your applications will require speed, scalability and high availability to process transactions and perform analyses in real time, even as the amount of data in your system grows dramatically.

How do many companies ensure speed and scale today? Usually with increasingly expensive hardware. In addition to busting budgets and constraining growth, this strategy has one major flaw: more hardware spending can’t eliminate the slowest part of any application infrastructure, disk reads and writes. When an application reads and writes data in a disk-based database for processing or analysis, it introduces significant latency, even with the latest storage technologies such as solid-state drives (SSDs).

In-memory computing

In-memory computing (IMC) can eliminate that latency. In-memory computing platforms use large pools of RAM to process and analyze data without continually reading from and writing to a disk-based database. An IMC platform can easily be inserted between the existing application and database layers with no rip-and-replace, and it can leverage distributed, JVM-based architectures for parallel processing, delivering performance gains of 1,000x or more.
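
To make that concrete, here is a minimal sketch of what the data grid layer can look like from application code, assuming the chosen IMC platform ships a JCache (JSR-107) implementation; the cache name and values are purely illustrative:

    import javax.cache.Cache;
    import javax.cache.CacheManager;
    import javax.cache.Caching;
    import javax.cache.configuration.MutableConfiguration;

    public class DataGridSketch {
        public static void main(String[] args) {
            // Resolve whichever JCache provider is on the classpath
            // (assumption: the IMC platform implements JSR-107).
            CacheManager manager = Caching.getCachingProvider().getCacheManager();

            // Create a cache; with a distributed provider, entries are
            // partitioned across the cluster's RAM rather than written to disk.
            Cache<Long, String> orders =
                    manager.createCache("orders", new MutableConfiguration<Long, String>());

            // Reads and writes are served from memory.
            orders.put(1001L, "3 items, EUR 129.00");
            System.out.println(orders.get(1001L));
        }
    }

Because JSR-107 is a standard API, application code written this way is not tied to a single vendor, which is part of what makes a no-rip-and-replace insertion between the application and the database practical.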

An in-memory computing platform built on commodity servers can be scaled out at any time by adding nodes to the cluster, allowing a business to grow its infrastructure cost-effectively as needed. In addition, distributed architectures can provide high availability and simplified maintenance, with data replicated across the cluster nodes. IMC platforms typically offer the following:

  • An in-memory data grid to cache data and accelerate and scale out applications running on RDBMS, NoSQL, or Hadoop databases, with some solutions even supporting ANSI-99 SQL and ACID transactions (a brief SQL sketch follows this list)
  • An in-memory database that serves as the system of record while providing full relational database functionality
  • A streaming analytics engine for analyzing and responding to incoming data in real time
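
For the SQL capability mentioned in the first bullet, here is a hedged sketch of querying the data grid, assuming the platform exposes its ANSI SQL support through a standard JDBC driver; the JDBC URL, table, and columns are hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class GridSqlSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical JDBC URL; the real driver and URL depend on the product.
            try (Connection conn = DriverManager.getConnection("jdbc:imc-grid://cluster-host:10800/");
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT id, total FROM orders WHERE status = ?")) {
                stmt.setString(1, "OPEN");
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        // The query runs against data held in RAM across the cluster.
                        System.out.println(rs.getLong("id") + " -> " + rs.getBigDecimal("total"));
                    }
                }
            }
        }
    }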

One advantage of an in-memory computing platform is ease of implementation. The platform can be inserted between an existing application and data layer with minimal coding, providing fast time to value and an extremely high-performance architecture. The redundancy offered by the computing cluster also provides a straightforward path to high availability.
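
As an illustration of how little code that insertion can require, the sketch below uses JCache’s read-through hook so that cache misses are loaded from the existing database transparently; the CustomerLoader class and its placeholder lookup are assumptions standing in for a real JDBC query:

    import java.util.HashMap;
    import java.util.Map;
    import javax.cache.Cache;
    import javax.cache.Caching;
    import javax.cache.configuration.FactoryBuilder;
    import javax.cache.configuration.MutableConfiguration;
    import javax.cache.integration.CacheLoader;

    // Hypothetical loader: invoked by the cache only on a miss.
    public class CustomerLoader implements CacheLoader<Long, String> {
        @Override
        public String load(Long key) {
            return "customer-" + key; // placeholder for a SELECT against the existing RDBMS
        }

        @Override
        public Map<Long, String> loadAll(Iterable<? extends Long> keys) {
            Map<Long, String> result = new HashMap<>();
            for (Long key : keys) {
                result.put(key, load(key));
            }
            return result;
        }

        public static void main(String[] args) {
            // Read-through configuration: the platform pulls missing entries via the loader.
            MutableConfiguration<Long, String> cfg = new MutableConfiguration<Long, String>()
                    .setReadThrough(true)
                    .setCacheLoaderFactory(FactoryBuilder.factoryOf(CustomerLoader.class));
            Cache<Long, String> customers =
                    Caching.getCachingProvider().getCacheManager().createCache("customers", cfg);

            // The first get() misses in memory and loads from the database;
            // repeat reads are served from RAM.
            System.out.println(customers.get(42L));
        }
    }

The application simply keeps calling get(); the platform decides whether the answer comes from RAM or, on a first miss, from the underlying database.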

IMC platforms can also enable Hybrid Transactional/Analytical Processing (HTAP), which supports real-time analytics and transactions on the same operational data set. Using a single database instead of separate databases for transactions and analytics reduces total cost of ownership. HTAP can also dramatically simplify life for a development team, which will need expertise in just one technology stack instead of two.
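
A minimal sketch of HTAP in practice, assuming the platform accepts both transactional writes and analytical SQL over the same JDBC connection; the URL, table, and queries are hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HtapSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical JDBC URL for the in-memory platform.
            try (Connection conn = DriverManager.getConnection("jdbc:imc-grid://cluster-host:10800/")) {
                // Transactional side: update the operational record set.
                conn.setAutoCommit(false);
                try (PreparedStatement upd = conn.prepareStatement(
                        "UPDATE orders SET status = 'SHIPPED' WHERE id = ?")) {
                    upd.setLong(1, 1001L);
                    upd.executeUpdate();
                }
                conn.commit();

                // Analytical side: aggregate over the very same data.
                try (Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(
                             "SELECT status, COUNT(*) FROM orders GROUP BY status")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + ": " + rs.getLong(2));
                    }
                }
            }
        }
    }

The operational update and the aggregate run against the same in-memory store, with no ETL step into a separate analytics database.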

In-memory computing is not new. Until recently, though, the cost of RAM meant it was economically feasible only for very high-value applications. Fortunately, the cost of RAM has dropped steadily over the years. Today, enterprises of all sizes and across many industries are recognizing that the additional cost of an in-memory computing platform is easily justified by the improved end-user experience and application performance. Gartner believes that the in-memory technology market will grow at a compound annual rate of 22% to reach $13 billion by the end of 2020.

The maturation and growing availability of in-memory computing platforms are a boon for any company with data-intensive applications that require extreme speed and scale. Now is the time to begin exploring their capabilities.

Author

Nikita Ivanov

Nikita Ivanov is the founder and CTO of GridGain Systems, which was started in 2007 and is funded by RTP Ventures and Almaz Capital.

Nikita has over 20 years of experience in software application development, building HPC and middleware platforms and contributing to the efforts of other startups and notable companies, including Adaptec, Visa, and BEA Systems.

He is an active member of the Java middleware community and a contributor to the Java specification, and he holds a Master’s degree in Electro Mechanics from Baltic State Technical University in Saint Petersburg, Russia.

Read Nikita Ivanov’s personal blog on WordPress.

