nuclio is at the center of a new open source serverless platform
nuclio, a new open source serverless platform, aims to deliver all the ease and abstraction of serverless without the infrastructure hassle.
Serverless has something of a bad reputation. All the pluses of the format seem to come at the cost of speed and flexibility. To address this, the folks at iguazio decided to develop a new open source serverless platform. nuclio was born out of their elastic data life-cycle management service for high-performance events and data processing.
Why another serverless project? nuclio fills a need within the field. Specifically, nuclio is designed for:
- Delivering real-time performance and maximum parallelism
- Enabling simple debugging, regression and a multi-versioned CI/CD pipeline
- Supporting pluggable data/event sources with common APIs
- Running portably across low-power devices, laptops, on-premises clusters and public clouds
All of this abstraction of data resources supports code portability, simplicity, and data-path acceleration.
nuclio’s core component, the function processor, is written in Go. The platform is still under development, but it already supports Go and Python functions, with Java and Node.js support planned. Since it’s a work in progress, volunteers are welcome to join in and help shape the future of this open source serverless platform.
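To make the programming model concrete, here is a minimal nuclio-style Python function. The `handler(context, event)` signature matches nuclio's documented Python runtime; the stand-in `Context` and `Event` classes below exist only so the sketch runs outside the platform and are not nuclio's real types.

```python
class Event:
    """Stand-in for the event object the nuclio processor passes to a handler."""
    def __init__(self, body):
        self.body = body

class Context:
    """Stand-in for the nuclio context (logging, data bindings, and so on)."""
    def log(self, message):
        print(message)

def handler(context, event):
    # nuclio invokes this entry point once per incoming event.
    context.log("processing event")
    return "Hello, %s!" % event.body

# Simulate one invocation the way the processor would.
print(handler(Context(), Event("nuclio")))
```

Inside the platform, the processor constructs the real context and event for you and routes the return value back to whatever triggered the call.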
Need for speed
Specifically, nuclio is built for speed above all else. According to its specs, a single function instance can process hundreds of thousands of HTTP requests or data records per second. This is roughly 10 to 100 times faster than other frameworks.
Here’s how they do it.
At the heart of nuclio’s architecture is the function processor. This acts as the function’s “OS,” giving it access to all events, data, logs, etc. Function code can be plugged into a number of event sources, including HTTP, Kinesis, Kafka, RabbitMQ, MQTT, NATS, V3IO, and other emulators.
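The payoff of pluggable event sources is that the processor normalizes everything into one event shape, so a single handler serves them all. The `GenericEvent` class and the source names below are illustrative, not nuclio's actual types:

```python
class GenericEvent:
    """Illustrative normalized event: same shape regardless of source."""
    def __init__(self, trigger_kind, body):
        self.trigger_kind = trigger_kind  # e.g. "http", "kafka"
        self.body = body

def handler(context, event):
    # The handler never inspects the transport; it just reads the body.
    return event.body.upper()

# The same function handles an HTTP request and a Kafka record alike.
for source in ("http", "kafka"):
    print(handler(None, GenericEvent(source, "payload")))
```

Switching a function from, say, HTTP to Kafka then becomes a configuration change rather than a code change.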
External data like objects, files, databases, and streams can be accessed through the data binding interface. This takes care of all the data connections, security, and caching aspects. Then, you can write a function that uses local files or access remote data over HTTP without ever changing the code.
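The data-binding idea can be sketched as follows: the function asks a named binding for data, and configuration (not code) decides whether that name maps to a local file, an object store, or an HTTP endpoint. The `FileBinding` class and the binding name `dataset` here are hypothetical, not nuclio's real interface:

```python
import os
import tempfile

class FileBinding:
    """Hypothetical binding backed by the local filesystem."""
    def __init__(self, root):
        self.root = root

    def get(self, key):
        with open(os.path.join(self.root, key)) as f:
            return f.read()

def handler(context, key):
    # The function only knows the binding name; swapping FileBinding for an
    # HTTP-backed binding would leave this code untouched.
    return context["dataset"].get(key)

# Wire up a local-file binding and invoke the function.
root = tempfile.mkdtemp()
with open(os.path.join(root, "greeting.txt"), "w") as f:
    f.write("hello from a file")

context = {"dataset": FileBinding(root)}
print(handler(context, "greeting.txt"))
```

This is what lets the same function body run against local files in development and remote data in production.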
One of the main goals of the project was to create a real-time processor. Running functions in the nuclio processor ecosystem should be much faster than writing your own. nuclio claims to run 400,000 function invocations per second and respond with 0.1 ms latency. That would make it a hundred times faster than most serverless/FaaS solutions.
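A quick back-of-the-envelope check shows those two claims are mutually consistent: by Little's law, throughput times latency gives the number of invocations that must be in flight at once.

```python
# Little's law: in-flight requests = throughput x latency.
throughput = 400_000     # invocations per second (claimed)
latency_s = 0.1 / 1000   # 0.1 ms expressed in seconds

in_flight = throughput * latency_s
print(in_flight)  # 40.0 concurrent invocations
```

Roughly 40 concurrent invocations is well within reach of a handful of parallel workers on one instance, so the numbers hang together.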
How? Other languages access the Go-based processor through low-latency shared memory. This eliminates context switches as well as process start-up overhead.
If you’re interested in making the switch, information on nuclio can be found on the project’s website as well as on GitHub. You can even test-drive nuclio with the all-in-one Docker version. This comes with a few built-in examples to help explain how to write functions, use logs, and add package dependencies through inline comments or defined events.