And now for something completely Concurnas

Introducing new JVM language Concurnas

Jason Tatton

It’s not every day that a new JVM language is born unto the world, so to celebrate the arrival of Concurnas here’s a complete introduction to the programming language by creator Jason Tatton. Concurnas has modern syntax and features, is open source and has GPU computing built in, which opens up the possibility for machine learning applications. Let’s see what Concurnas can do!

What is Concurnas and what sets it apart?

Concurnas is a new general purpose open source JVM programming language designed for building concurrent, distributed and parallel systems. Concurnas is easy to learn; it offers incredible performance as well as many features for building modern, enterprise scale computer software. What distinguishes Concurnas from existing programming languages is that it presents a unique, simplified means of performing concurrent, distributed and parallel computation. These forms of computation are some of the most challenging in modern software engineering, but with Concurnas they are made easy.

Utilizing Concurnas to build software enables developers to easily and reliably realize the full computing power offered by today’s multi-core computers, allowing them to write better software and be more productive. In this article we’re going to have a look at some of the key features of Concurnas that make it unique by building the core components of a trading application for use in a finance company.

The major goals of Concurnas

Concurnas has been created with five major goals in mind:

  • To offer the syntax of a dynamically typed language with the type safety and performance of a strongly typed compiled language, with optional types and an optional degree of conciseness backed by compile time error checking.
  • To make concurrent programming easier, by presenting a programming model which is more intuitive to non-software engineers than the traditional thread and lock model.
  • To allow both researchers and practitioners alike to be productive such that an idea can be taken from inception all the way through to production using the same language and the same code.
  • Incorporate and support modern trends in software engineering including null safety, traits, pattern matching and first class citizen support for dependency injection, distributed computing and GPU computing.
  • To facilitate future programming language development by supporting the implementation of Domain Specific Languages and by enabling other languages to be embedded within Concurnas code.

Introduction to Concurnas

Basic syntax

Let us first start off with some basic syntax. Concurnas is a type inferred language, with optional types:

myInt = 99
myDouble double = 99.9 //here we choose to be explicit about the type of myDouble
myString = "hello " + " world!"//inferred as a String

val cannotReassign = 3.2f
cannotReassign = 7.6 //not ok, compilation error

anArray = [1 2 3 4 5 6 7 8 9 10]
aList = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

aMatrix = [1 2 3 ; 4 5 6 ; 7 8 9]

Importing code

Since Concurnas runs upon the JVM and is Java compatible, we are afforded access to the existing large pool of libraries available for Java and the JDK, and of course to any software which the enterprise in which we are operating has already created in any JVM language (such as Scala, Kotlin etc.). We can import code via familiar mechanisms:

from java.util import List
import java.util.Set

Functions

Let us now introduce functions. Concurnas is an optionally concise language, meaning that the same function may be implemented with differing degrees of verbosity to suit the target audience reading the code. As such the following three implementations are functionally identical:

def plus(a int, b int) int{//the most verbose form
	return a + b
}

def plus(a int, b int) {//return type inferred as int
	a + b//implicit return
}

def plus(a int, b int) => a + b
//=> may be used where the function body consists of one line of code

Here is a simple function we will use later on in this article:

def print(fmtString String, args Object...){//args is a vararg
	System.out.println(String.format(fmtString, args))
}

Parameters to functions in Concurnas may be declared as vararg parameters, that is to say a variable number of arguments may be passed to them. Hence the following invocations of our print function are both perfectly valid:

print("hello world!") //prints: hello world!
print("hello world! %s %s %s", 1, 2, 3) //prints: hello world! 1 2 3

Concurrency model

Where Concurnas really stands out is in its concurrency model. Concurnas does not expose threads to the programmer; rather, it has thread-like ‘isolates’, isolated units of code that, at runtime, are executed concurrently by being multiplexed onto the underlying hardware of the machine(s) upon which Concurnas is running. When creating isolates we are constrained only by the amount of memory available on the machine we are operating upon. We can create an isolate by appending a block of code or function invocation with the bang operator: !:

m1 String: = {"hello "}!//isolate with explicit returned type String:
m2 = {"world!"}!//spawned isolate with implicit returned type String:
msg = m1 + m2
print(msg) //outputs: hello world!

Above, msg will only be calculated when the isolates creating m1 and m2 have finished concurrent execution and have written their resulting values to their respective variables. Isolates do not permit sharing of state with one another other than through special types called ‘refs’. A ref is simply a normal type appended with a colon: :. For instance, above we have seen the spawned isolates returning values of type String:. Refs may be updated concurrently by different isolates on a non-deterministic basis.
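For readers coming from Java, the isolate-and-ref pattern above is loosely analogous to composing futures. The analogy is only approximate: Concurnas joins on refs implicitly when they are read, whereas the Java sketch below must call join() explicitly, and the class and method names here are illustrative only:

```java
import java.util.concurrent.CompletableFuture;

public class IsolateAnalogy {
    static String compose() {
        // each supplyAsync call plays the role of a spawned isolate;
        // join() plays the role of reading a ref, blocking until the
        // "isolate" has written its value
        CompletableFuture<String> m1 = CompletableFuture.supplyAsync(() -> "hello ");
        CompletableFuture<String> m2 = CompletableFuture.supplyAsync(() -> "world!");
        return m1.join() + m2.join();
    }

    public static void main(String[] args) {
        System.out.println(compose()); // prints: hello world!
    }
}
```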


Refs possess a special feature in that they can be watched for changes; we can then write code to react to those changes. This is achieved in Concurnas via the onchange and every statements. onchange and every statements may return values; these values are themselves refs, since onchange and every statements operate within their own dedicated isolates:

a int: = 10
b int: = 10
//^two refs

oc1 := onchange(a, b){
	plus(a, b)
}

ev1 := every(a, b){
	plus(a, b)
}

oc2 <- plus(a, b)//shorthand for onchange
ev2 <= plus(a, b)//shorthand for every

//... other code

a = 50//we change the value of a

await(ev2;ev2 == 60)//wait for ev2 to be reactively set to 60
//carry on with execution...

onchange statements will execute the code defined within their blocks when any one of the watched refs is changed. every statements operate in the same way but will trigger their code for execution on every update to a watched ref, including the initial value. Thus, when ref a is updated above, variables oc1, ev1, oc2 and ev2 will be updated with the sum of a and b, with ev1 and ev2 having previously held the initial sum of a and b.
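The behavioural difference between onchange and every (only the latter also fires for the initial value) can be mimicked in plain Java with a tiny single-threaded observer sketch. WatchedCell is a hypothetical helper written purely for this illustration; it omits the refs, isolates and concurrency that Concurnas provides:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntConsumer;

// hypothetical helper for illustration only - NOT a Concurnas API
// "every" listeners also receive the current value at registration time,
// while "onchange" listeners only receive subsequent updates
class WatchedCell {
    private int value;
    private final List<IntConsumer> listeners = new ArrayList<>();

    WatchedCell(int initial) { value = initial; }

    void onchange(IntConsumer l) { listeners.add(l); }               // updates only
    void every(IntConsumer l) { listeners.add(l); l.accept(value); } // initial value too

    void set(int v) {
        value = v;
        for (IntConsumer l : listeners) l.accept(v); // notify watchers
    }
}

public class OnchangeVsEvery {
    public static void main(String[] args) {
        WatchedCell a = new WatchedCell(10);
        List<Integer> ocSeen = new ArrayList<>();
        List<Integer> evSeen = new ArrayList<>();
        a.onchange(ocSeen::add);
        a.every(evSeen::add);
        a.set(50);
        System.out.println(ocSeen); // prints: [50]
        System.out.println(evSeen); // prints: [10, 50]
    }
}
```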

Building an application

Now that we have the basics in order, let’s start to put them together in an application. Let’s say we’re working on financial trading systems in a typical investment bank or hedge fund. We want to quickly put together a system that takes ticking timestamped asset prices from a marketplace and, when the price satisfies certain criteria, performs an action. The most natural way to architect this is as a reactive system utilizing some of the special concurrency related features of the language.

Create a function

First we create a function to output some repeatably consistent pseudo-random timeseries data that we can use for development and testing:

from java.util import Random
from java.time import LocalDateTime

class TSPoint(-dateTime LocalDateTime, -price double){
//class with two fields having implicit getter functions automatically defined by prefixing them with -
	override toString() => String.format("TSPoint(%S, %.2f)", dateTime, price)
}

def createData(seed = 1337){//seed is an optional parameter with a default value
	rnd = new Random(seed)
	startTime = LocalDateTime.\of(2020, 1, 1, 0, 0)//midnight 1st jan 2020
	price = 100.
	
	def rnd2dp(x double) => Math.round(x*100)/100. //nested function
	
	ret = list()
	for(sOffset in 0 to 60*60*24){//'x to y' - an integer range from 'x' to 'y'
		time = startTime.plusSeconds(sOffset)
		ret.add(TSPoint(time, price))
		price += rnd2dp(rnd.nextGaussian()*0.01)
	}

	ret
}

Above we see that we first define a class TSPoint, the instance objects of which are used to represent the individual points of the timeseries associated with our tradeable asset. Let’s check that our function outputs a sensible range of test data:

timeseries = createData()//call our function with default random seed
prices = t.price for t in timeseries//list comprehension

min = max Double? = null//min and max may be null
for(price in prices){
	if(min == null or price < min){
		min = price
	}elif(max == null or price > max){
		max = price
	}
}

print("min: %.2f max: %.2f", min, max)
//outputs: min: 96.80 max: 101.81

When calling our function with the default random seed we can see that it outputs a reasonable intra-day range of data: "min: 96.80 max: 101.81".

Nullable types

Now is a great time for us to introduce the support that Concurnas has for nullable types. In keeping with modern trends in programming languages, Concurnas (like Kotlin and Swift) is a null safe language. That is to say, if a variable has the capacity for being null, it must be explicitly declared as such; otherwise it is assumed to be non-null. It is not possible to assign a null value to a non-null type; rather, the type must be explicitly declared as being nullable by appending it with a question mark, ?:

aString String
aString = null //this is a compile time error, aString cannot be null

nullable String?
nullable = null //this is ok

len = nullable.length()//this is a compile time error as nullable might be null

We see above that the call to nullable.length() results in a compile time error, as nullable might be null, which would cause the function invocation of length() to throw the dreaded NullPointerException. To our aid, however, Concurnas offers a number of operators which make working with variables of a nullable type, like our nullable variable, safer. They are as follows:

len1 Integer? = nullable?.length()      //1. the safe call dot operator
len2 int = (nullable?: "oops").length() //2. the elvis operator
len3 int = nullable??.length()          //3. the non null assertion operator

These operators behave as follows:

  1. The safe call dot operator will return null (and therefore a nullable type) if the left hand side of the dot is a nullable type resolving to null.
  2. The elvis operator is similar to the safe call operator except that when the left hand side is null, the specified value on the right hand side of the operator is returned instead of null ("oops" in our example above).
  3. The non null assertion operator disables the null protections and will simply throw an exception if its left hand side resolves to null.
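These three operators map onto idioms many Java developers already know. The sketch below is a rough Java rendering using java.util.Optional; it is an analogy only, not Concurnas semantics, and the helper method names are invented for this illustration:

```java
import java.util.Objects;
import java.util.Optional;

public class NullableOps {
    // 1. safe call ?.  ~ Optional.map: yields null when the receiver is null
    static Integer safeCallLen(String s) {
        return Optional.ofNullable(s).map(String::length).orElse(null);
    }

    // 2. elvis ?:  ~ Optional.orElse: substitutes a fallback when null
    static int elvisLen(String s) {
        return Optional.ofNullable(s).orElse("oops").length();
    }

    // 3. non null assertion ??  ~ Objects.requireNonNull: throws when null
    static int assertLen(String s) {
        return Objects.requireNonNull(s).length();
    }

    public static void main(String[] args) {
        String nullable = null;
        System.out.println(safeCallLen(nullable)); // prints: null
        System.out.println(elvisLen(nullable));    // prints: 4
        try {
            assertLen(nullable);
        } catch (NullPointerException e) {
            System.out.println("assertLen threw NullPointerException");
        }
    }
}
```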

Concurnas is also able to infer the scope of nullability for nullable types. For areas where we have asserted a nullable variable as being not null (for instance, in a branching if statement), we are able to use the variable as if it were not nullable:

def returnsNullable() String? => null

nullableVar String? = returnsNullable()

len int = if( nullableVar <> null ){
	nullableVar.length()//ok because nullableVar cannot be null here!
}else{
	-1
}

print(len)//prints: -1

Together this support for nullable types helps us write more reliable, safer programs.

Trigger a trading operation

We shall now continue to build our trading system. We want to trigger a trading operation as soon as the tracked asset reaches a certain price. We can use an onchange block to trigger this process when the price of the asset is above 101.71:

lastTick TSPoint://our asset timeseries

onchange(lastTick){
	if(lastTick.price > 101.71){
		//perform trade here...
		return
	}
}

Notice above the use of return within the onchange block. This ensures that when the trading condition is met, the associated trading operation is performed only once, after which the onchange block terminates. Without the return statement the onchange block would trigger every time the trading condition is met, until lastTick goes out of scope.

Creating a ref

We can easily perform other interesting things along the lines of the previous pattern. For instance, we can create a ref, lowhigh, holding the rolling low/high prices for the day as follows:

lowhigh (TSPoint, TSPoint)://lowhigh is a tuple type

onchange(lastTick){
	if(not lowhigh:isSet()){//using : allows us to call methods on refs themselves
		lowhigh = (lastTick, lastTick)
	}
	else{
		(prevlow, prevHigh) = lowhigh//tuple decomposition
		
		if(lastTick.price < prevlow.price){
			lowhigh = (lastTick, prevHigh)
		}elif(lastTick.price > prevHigh.price){
			lowhigh = (prevlow, lastTick)
		}
	}
}

Build an object-oriented system

Now that we have the dealing and informational components of our trading system prepared, we are ready to build an object oriented system using them. To do this we are going to take advantage of the support built into Concurnas for Dependency Injection (DI). DI is a modern software engineering technique which makes reasoning about, testing and re-using object oriented software components easier. In Concurnas, first class citizen support is provided for DI in the form of object providers; these are responsible for creating the graph of, and injecting dependencies into, provided instances of classes. Usage is optional but pays dividends for large projects:

trait OrderManager{	def doTrade(onTick TSPoint) void }
trait InfoFeed{ def display(lowhigh (TSPoint, TSPoint):) }

inject class TradingSystem(ordManager OrderManager, infoFeed InfoFeed){
//'classes' marked as inject may have their dependencies injected
	def watch(){
		tickStream TSPoint:
	
		lowhigh (TSPoint, TSPoint):
	
		onchange(tickStream){
			if(not lowhigh:isSet()){
				lowhigh = (tickStream, tickStream)
			}
			else{
				(prevlow, prevHigh) = lowhigh
				
				if(tickStream.price < prevlow.price){
					lowhigh = (tickStream, prevHigh)
				}elif(tickStream.price > prevHigh.price){
					lowhigh = (prevlow, tickStream)
				}
			}
		}
		infoFeed.display(lowhigh:)//appending : indicates pass-by-ref semantics
		
		onchange(tickStream){
			if(tickStream.price > 101.71){
				ordManager.doTrade(tickStream)
				return
			}
		}
		tickStream:
	}
}

actor TestOrderManager ~ OrderManager{
	result TSPoint:
	def doTrade(onTick TSPoint) void {
		result = onTick
	}
	
	def assertResult(expected String){
		assert result.toString() == expected
	}
}

actor TestInfoFeed ~ InfoFeed{
	result (TSPoint, TSPoint):
	def display(lowhigh (TSPoint, TSPoint):) void{
		result := lowhigh//:= assigns the ref itself instead of the refs value
	}
	
	def assertResult(expected String){
		await(result ; (""+result) == expected)
	}
}


provider TSProviderTests{//this object provider performs dependency injection into instance objects of type `TradingSystem`
	provide TradingSystem
	single provide OrderManager => TestOrderManager()
	single provide InfoFeed => TestInfoFeed()
}


//create our provider and create a TradingSystem instance:
tsProvi = new TSProviderTests()
ts = tsProvi.TradingSystem()

//Populate the tickStream with our test data
tickStream := ts.watch()
for(tick in createData()){
	tickStream = tick
}

//extract tests and check results are as expected...
testOrdMng = tsProvi.OrderManager() as TestOrderManager
testInfoFeed = tsProvi.InfoFeed() as TestInfoFeed

//validation:
testOrdMng.assertResult("TSPoint(2020-01-01T04:06:18, 101.71)")
testInfoFeed.assertResult("(TSPoint(2020-01-01T19:59:10, 96.80), TSPoint(2020-01-01T10:10:05, 101.81))")

print('All tests passed!')

The above introduces another two interesting features of Concurnas: traits and actors. Traits in Concurnas are inspired by traits in Scala; here, however, we are simply using them like interfaces (as seen in languages such as Java), in that they specify methods which concrete implementing classes must provide. Actors in Concurnas are special classes whose instance objects may be shared between different isolates: actors have their own concurrency control so as to avoid non-deterministic changes to their internal state by multiple isolates interacting with them concurrently.
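To see why that built-in concurrency control matters, here is a rough Java analogy of an actor-like object, sketched with synchronized methods. This is an illustration only, not how Concurnas implements actors; without the synchronization, the two threads below would race and typically lose increments:

```java
// rough Java analogy of an actor: every method is synchronized, so
// concurrent callers cannot interleave and corrupt internal state
class CounterActor {
    private int count = 0;
    synchronized void increment() { count++; }
    synchronized int get() { return count; }
}

public class ActorAnalogy {
    public static void main(String[] args) throws InterruptedException {
        CounterActor actor = new CounterActor();
        // two "isolate-like" threads hammer the same shared actor
        Runnable work = () -> { for (int i = 0; i < 10_000; i++) actor.increment(); };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(actor.get()); // prints: 20000
    }
}
```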


Building a reactive system such as the above from scratch with traditional programming languages would of course be a long-winded affair. As can be seen above, with Concurnas this is a straightforward operation.

Domain Specific Languages (DSLs)

Another nice feature of Concurnas is its support for Domain Specific Languages (DSLs). Expression lists are one feature which makes implementing DSLs easy. Expression lists essentially enable us to skip writing dots and parentheses around method invocations. This leads to a more natural way of expressing algorithms. We can use this in our example trading system. The following is perfectly valid Concurnas code:

order = buy 10e6 when GT 101.71

This is enabled by creating our order API as follows:

enum BuySell{BUY, SELL}

def buy(amount double) => Order(BuySell.BUY, amount)
def sell(amount double) => Order(BuySell.SELL, amount)

open class Trigger(price double)
class GT(price double) < Trigger(price)
class LT(price double) < Trigger(price)

class Order(direction BuySell, amount Double){
	trg Trigger?
	def when(trg Trigger) => this.trg = trg; this
}

order = buy 10e6 when GT 101.71

Additionally, though not covered here, Concurnas supports operator overloading and extension functions.

GPU computing support

Now let us briefly look at the support built into Concurnas for GPU computing.

GPUs can be thought of as massively data parallel computation devices that are ideally suited to performing math oriented operations on large datasets. Whereas today a typical high end CPU (e.g. AMD Ryzen Threadripper 3990X) may have up to 64 cores, affording us up to 64 instances of concurrent computation, a comparable GPU (e.g. NVIDIA Titan RTX) has 4608! All graphics cards in modern computers have a GPU; effectively we all have access to a supercomputer. It is common for algorithms implemented on the GPU to be up to 100x faster (or more!) than their CPU implementations. Furthermore, the relative cost of this computation when performed on a GPU, from a hardware and power perspective, is far lower than its CPU counterpart.

There is however a catch… GPU algorithms have a relatively esoteric implementation and the nuances of the underlying GPU hardware must be understood in order to obtain optimal performance. Traditionally, knowledge of C/C++ has been a requirement. With Concurnas things are different.

Concurnas has first class citizen support for GPU computing, meaning that support is built directly into the language itself to enable developers to leverage the power of GPUs. Thus we can write idiomatic Concurnas code and have syntactic and semantic checks performed at compile time in one step, greatly simplifying our build process and eliminating the need for us to learn C/C++ or rely upon runtime checks of our code.

GPU algorithms are implemented in entry points known as gpukernels. Let’s look at a simple algorithm for matrix multiplication (a core component of linear algebra which is heavily used in machine learning and finance):

gpukernel 2 matMult(wA int, wB int, global in matA float[2], global in matB float[2], global out result float[2]) {
	globalRow = get_global_id(0) // Row ID
	globalCol = get_global_id(1) // Col ID

	rescell = 0f;
	for (k = 0; k < wA; ++k) {//matrices are flattened to vectors on the gpu...
		rescell += matA[globalCol * wA + k] * matB[k * wB + globalRow];
	} 
	// Write element to output matrix
	result[globalCol * wA + globalRow] = rescell;
}

This GPU kernel presents a succinct but naive implementation. The code can be optimized to improve performance significantly, for instance, through the use of local memory. For now though this is good enough. We can compare this to our traditional CPU based matrix multiplication algorithm as follows:

def matMultCPU(A float[2], B float[2]) { 
	n = A[0].length
	m = A.length
	p = B[0].length
	result = new float[m][p]
 
	for(i = 0;i < m;i++){
		for(j = 0;j < p;j++){
			for(k = 0;k < n;k++){
				result[i][j] += A[i][k] * B[k][j]
			}
		}
	}
	result
}

The core matrix multiplication algorithm is the same across the GPU and CPU implementations. However, there are some differences: the GPU kernel itself is executed in parallel on our GPU, with the only distinction between those individual parallel executions being the values returned from the get_global_id calls; these are used to identify which portion of the data set each instance should be targeting. Additionally, since GPU kernels cannot return values directly, output buffers need to be passed into them as parameters.
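The per-work-item view of the kernel can be sanity-checked on the CPU with a short sequential simulation. The Java sketch below (a simulation only; it says nothing about how a GPU actually schedules work items, and it inherits the kernel's square-matrix indexing assumption) replays the kernel body once per output cell over flattened row-major arrays, and for the 3x3 sample matrices used later in the article it reproduces the printed result:

```java
import java.util.Arrays;

public class KernelSimulation {
    // replay the matMult kernel body sequentially: one loop iteration per
    // GPU work item, with (globalRow, globalCol) standing in for the values
    // returned by get_global_id(0) and get_global_id(1)
    static float[] simulate(int wA, int wB, float[] matA, float[] matB) {
        int rowsA = matA.length / wA;
        float[] result = new float[rowsA * wB];
        for (int globalRow = 0; globalRow < wB; globalRow++) {
            for (int globalCol = 0; globalCol < rowsA; globalCol++) {
                float rescell = 0f;
                for (int k = 0; k < wA; k++) { // same flattened indexing as the kernel
                    rescell += matA[globalCol * wA + k] * matB[k * wB + globalRow];
                }
                result[globalCol * wA + globalRow] = rescell;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // the sample matrices from the GPU example, flattened row-major
        float[] matA = {1, 2, 3, 4, 5, 6, 7, 8, 9};
        float[] matB = {2, 6, 6, 3, 5, 2, 7, 4, 3};
        System.out.println(Arrays.toString(simulate(3, 3, matA, matB)));
        // prints: [29.0, 28.0, 19.0, 65.0, 73.0, 52.0, 101.0, 118.0, 85.0]
    }
}
```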

Now that we have created our GPU kernel we are able to execute it on the GPU. This is more involved than standard CPU based computation, in that we are setting up an asynchronous pipeline of copying data to the GPU, executing the kernel, copying results back from the GPU and finally cleaning up. Luckily Concurnas leverages its ref model of concurrency to streamline this process, which also lets us keep our GPU busy (thus maximizing throughput), use multiple GPUs concurrently, and do other CPU based work at the same time as GPU execution:

def compareMulti(){
	//we wish to perform the following on the GPU: matA * matB
	//matA and matB are both matrices of type float
	matA = [1f 2 3 ; 4f 5 6; 7f 8 9]
	matB = [2f 6 6; 3f 5 2; 7f 4 3]

	//use the first gpu available
	gps = gpus.GPU()
	deviceGrp = gps.getGPUDevices()[0]
	device = deviceGrp.devices[0]

	//allocate memory on gpu
	inGPU1 = device.makeOffHeapArrayIn(float[2].class, 3, 3)
	inGPU2 = device.makeOffHeapArrayIn(float[2].class, 3, 3)
	result = device.makeOffHeapArrayOut(float[2].class, 3, 3)
	
	//asynchronously copy input matrix from RAM to GPU
	c1 := inGPU1.writeToBuffer(matA)
	c2 := inGPU2.writeToBuffer(matB)
	
	//create an executable kernel reference: inst
	inst = matMult(3, 3, inGPU1, inGPU2, result)

	//asynchronously execute with 3*3 => 9 'threads'
	//if c1 and c2 have not already completed, wait for them
	compute := device.exe(inst, [3 3], c1, c2)

	//copy result matrix from GPU to RAM
	//if compute has not already completed, wait for it
	ret = result.readFromBuffer(compute)
	
	//cleanup
	del inGPU1, inGPU2, result
	del c1, c2, compute
	del deviceGrp, device
	del inst
	
	//print the result
	print('result via GPU: ' + ret)
	print('result via CPU: ' + matMultCPU(matA, matB))
	//prints:
	//result via GPU: [29.0 28.0 19.0 ; 65.0 73.0 52.0 ; 101.0 118.0 85.0]
	//result via CPU: [29.0 28.0 19.0 ; 65.0 73.0 52.0 ; 101.0 118.0 85.0]
}

Closing thoughts

This concludes our article for now. We’ve looked at many of the aspects of Concurnas which make it unique, though there are many more features of interest to the modern programmer, such as first class citizen support for distributed computing, temporal computing, vectorization, language extensions, off heap memory management, lambdas and pattern matching, to name a few.

Check out the Concurnas website or dive straight into the GitHub repo.

Author
Jason Tatton
Jason is the creator of the Concurnas Programming Language and the founder of Concurnas Ltd. He wrote his first computer program aged 9 and has been coding ever since for over 25 years. Jason has written and led teams developing algorithmic trading systems for some of the world's most prestigious investment banks including Bank of America Merrill Lynch, Deutsche Bank and J.P. Morgan. He is passionate about technology, programming and making Concurnas the best programming language it can be. When not building Concurnas or consulting he enjoys spending time with his family.
