Flogo enables developers to build microservices or functions with a browser-based flow designer
Project Flogo is a Flow-based process engine written in Go. As the project continues to mature, it has expanded to simplify the notion of event-driven apps by providing multiple action implementations for various event-processing techniques. We talked to Matt Ellis, Director of Product Management and Head of Open Source at TIBCO, about Project Flogo, its future, why developers should use it, and more.
JAXenter: Could you tell us more about Project Flogo? What’s it all about?
Matt Ellis: Flogo is an open source ecosystem of event-driven capabilities that simplifies building efficient, modern serverless functions and edge microservices. The idea for Flogo came out of a Friday afternoon conversation, and it was simple: let's leverage our expertise in application integration by building our process engine in a lightweight, statically compiled language better suited for a new breed of compute at the edge and in cloud native environments.
Golang was the ideal language for tackling such an effort, as it is statically compiled, produces a single binary, and does not require any additional OS dependencies, hence making it more secure and reliable. In fact, the first Flogo prototype was built over that same weekend.
The naming of the project was equally simple: it's a Flow-based process engine written in Go, hence Flogo. As Flogo continues to mature, it has expanded to simplify the notion of event-driven apps by providing multiple action implementations for various event-processing techniques: streaming, contextual rules, and app integration.
JAXenter: How does Flogo work? Why should developers use it?
Matt Ellis: Flogo enables developers to build microservices or functions with a browser-based flow designer. The flows can then be deployed to any infrastructure: on-premises, at the edge on devices, and on serverless platforms, such as AWS Lambda; all without any code changes to the application. Flogo exposes several different event-driven processing paradigms focused around stream processing, contextual rule inferencing, and application integration.
With over 500 community contributions in the form of activities (units of work that can be chained together to build applications) and triggers (event consumers such as Kafka and MQTT), developers can build their applications using our Go API, or using our JSON-based DSL with a visual web development environment, and take advantage of all of those contributions.
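Conceptually, an activity is a unit of work over an event payload, a flow chains activities together, and a trigger invokes the flow whenever an event arrives. A minimal sketch of that idea in plain Go (illustrative only; the type names here are assumptions and not Flogo's actual Go API):

```go
package main

import "fmt"

// Activity is a unit of work that can be chained with others to build a flow.
// This is an illustrative type, not Flogo's real API.
type Activity func(input string) string

// Flow chains activities so each activity's output feeds the next one.
func Flow(activities ...Activity) Activity {
	return func(input string) string {
		for _, act := range activities {
			input = act(input)
		}
		return input
	}
}

func main() {
	// Two simple activities chained into one flow.
	logPayload := func(in string) string {
		fmt.Println("received:", in)
		return in
	}
	addGreeting := func(in string) string {
		return "hello, " + in
	}

	flow := Flow(logPayload, addGreeting)

	// A trigger (e.g. a Kafka or MQTT consumer) would invoke the flow on
	// each incoming event; here we call it directly with a sample payload.
	fmt.Println(flow("flogo"))
}
```

The appeal of the model is that the flow itself is infrastructure-agnostic: the same chain of activities can be handed to a trigger running on a device, in a container, or behind a serverless platform.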
JAXenter: What does the future hold for Flogo?
Matt Ellis: It’s important to note that Project Flogo was founded as an open source solution and will continue to be maintained, matured, and evolved in the open. The Flogo community is continuing to evolve the project to support additional cloud native requirements: configuration management, open tracing and monitoring, and so on. The project will also continue to mature in the areas of machine learning geared toward app development.
The current roadmap for the open source project is available on GitHub.
JAXenter: What are the challenges of using open source in an enterprise setting?
Matt Ellis: Typically, employers are challenged to strike a balance between allowing employees to use the tools of their choice and maintaining proper management and visibility across all tools used within the software development lifecycle.
Support for large-scale open source projects is also a challenge several enterprises are dealing with today. The nature of open source software is collaborative, and enterprises must often dedicate their own bandwidth to tweak the software and adapt solutions to their needs.
With that said, the enterprise trend is definitely moving toward an “open source first” policy when building new solutions. This is something I’m especially excited about!
JAXenter: Could you also name some benefits?
Matt Ellis: There are various benefits of using open source in an enterprise setting, both for the employees and the employer. Benefits for an employer range from community recognition (depending on the size and frequency of contributions) to returns that go well beyond that.
For example, the employer can expand the number of developers working to solve a specific problem as pull requests (PRs) are iterated over by repo maintainers and other contributors. The employer will benefit from the most practical and optimized solution for the specific project. Employers can also gain expertise from other community developers and contributors, which can help radically shape a project and move it in the right direction.
If an employer builds and maintains open source projects, the benefits are slightly different and perhaps more abundant. The rate at which the project can grow is perhaps obvious; beyond that, projects developed in the open are more transparent, users of these projects are more interested in becoming involved, and a real fan base can be born. As we've seen at TIBCO, and with many other prominent open source projects, this can lead to the opportunity to provide commercial wrappers around the open source solutions and thus monetize the project while maintaining a proper open source project.
For employees, open source contribution allows them to build their future by learning new development practices, languages, and techniques. When an employee contributes to an open source project, their inner circle of developers can expand to a global level, leading to massive personal growth opportunities.
JAXenter: Lately, you’ve focused on machine learning and serverless compute. What technical achievements have you brought to the Golang open source community in both these areas?
Matt Ellis: Project Flogo was the first to bring native AWS Lambda function support to an application integration framework. Likewise, Project Flogo was at the forefront of providing edge machine learning (inferencing) capabilities directly on device in a developer-first fashion. We'll continue to push the envelope as we look at providing a more robust set of tools around serverless application development and deployment, as well as the next iteration of serverless via WASM. Likewise, with machine learning, we'll continue marching forward with our vision of democratizing this technology for the application developer.
JAXenter: There are so many serverless-related technologies and tools out there; the latest one being AWS Firecracker announced at re:Invent. Where is serverless headed?
Matt Ellis: To be honest, I feel that many enterprises are just beginning to get their heads around containerization and what that means technically, as well as from an infrastructure and cost perspective. The adoption of serverless within the enterprise will continue to grow, and I suspect it will become the de facto standard deployment model for a number of applications, assuming the economic model is in line with the function load.
JAXenter: Is serverless a “revolution of the cloud,” as Maciej Winnicki, Principal Software Engineer at Serverless Inc. told us last year? What’s your take on this statement?
Matt Ellis: Serverless deployment is liberating in so many ways for application developers and even operations teams. Serverless is the next iteration of cloud compute and will become the de facto choice for deploying net-new, mission-critical applications (assuming the economic model is in line with expectations for the desired load).
Serverless via AWS Lambda definitely changes the game!