“Brain Simulator II is an open source biologically modeled neuron simulator”
We spoke with Charles Simon, founder of FutureAI.guru, about the current state of artificial intelligence and machine learning, Brain Simulator II, and more. Charles Simon discusses “Sallie”, an artificial entity being developed on the FutureAI platform, and what applications can benefit from it.
FutureAI.guru is developing software to fill in the areas where artificial intelligence (AI) has fallen short. This includes fundamental comprehension of real-world objects, cause and effect, and the passage of time—in short, the real-world context through which we humans understand the world.
JAXenter: Thank you for taking the time to answer our questions. First of all, what is Brain Simulator? What does it accomplish?
Charles Simon: Brain Simulator II is an open-source, biologically modeled neuron simulator with the added capability of incorporating any desired software and functionality. The primary value of the neuron simulator is to explore the capabilities and limitations of biological neurons so we can focus on the capabilities of the human brain that AI lacks.
Initially, it has shown that machine learning algorithms are impossible to implement in a biological neuron model, while other AI methods, such as knowledge graphs, are much more likely to exist in the brain.
The open-source Brain Simulator is available for download and will help all AI professionals get a better perspective on how today’s AI compares with plausible neural functions.
SEE ALSO: Will AGI Be a Friend or Foe?
JAXenter: Can you tell us about the latest advancements in the Brain Simulator technology regarding 3-D object comprehension?
Charles Simon: The Brain Simulator also includes the ability to shortcut the function of any cluster of neurons with a high-level program.
For example, the human brain devotes potentially hundreds of millions of neurons to depth perception, which can be accomplished in code with a few lines of trigonometry. The current development on BrainSimulator3 has extended the system with a knowledge graph (the Universal Knowledge Store, or UKS) to handle objects in a 3-D world with an internal mental model, so that, like a person, the system can know about the objects in its immediate surroundings.
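To illustrate the "few lines of trigonometry" point, here is a minimal sketch of stereo depth estimation using the standard disparity formula (depth = baseline × focal length / disparity). This is a textbook illustration, not FutureAI's actual code; the parameter values are hypothetical.

```python
def depth_from_disparity(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Estimate the distance to a point seen by two parallel cameras.

    baseline_m   -- distance between the two camera centers, in meters
    focal_px     -- camera focal length, in pixels
    disparity_px -- horizontal shift of the point between the two images, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity means the point is at infinity (or a bad match)")
    return baseline_m * focal_px / disparity_px

# Two cameras 6.5 cm apart (roughly human eye spacing), focal length 800 px,
# and a feature that shifts 20 px between the two views:
print(depth_from_disparity(0.065, 800, 20))  # 2.6 (meters)
```

The same computation that occupies vast neural circuitry in the brain reduces to a single division once the camera geometry is known, which is the shortcut the high-level modules exploit.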
JAXenter: Why is 3-D comprehension so difficult to achieve in AI models and what opportunities will it open up?
Charles Simon: Actually, the data model of a 3D world is well known and used in games all the time. The difficulties are in relating these to real-world complexity and ambiguity.
Add to this the idea that in human intelligence, what we know exists only in the context of other things we know. We know really basic things, like a centimeter or an inch, in relation to the size of our fingers or the sizes of the objects they measure (or, to be academic, in terms of the wavelengths of light which define them). A centimeter in the abstract isn't very meaningful. So in the world of human-like understanding, starting with a grid of coordinates is the wrong direction.
JAXenter: Could you explain a little how FutureAI’s “Sallie” works and what it is?
Charles Simon: Sallie is our name for the artificial entity being developed on the FutureAI platform. She consists of a “mind” which resides on a substantial computer and a variety of sensory “pods” through which she can learn about the real world.
The sensory pods are connected to the mind via WiFi and let Sallie explore and interact with her environment and the objects in it to learn about the fundamental concepts of reality. It’s through this interaction and “play” that Sallie will gain a better fundamental understanding.
JAXenter: What are some applications that could benefit from Sallie?
Charles Simon: It’s unrealistic to expect that any AI trained only on images, or on sound for that matter, would ever gain a fundamental understanding of the real world.
Because of Sallie’s multisensory pods, over the coming months she will gain this fundamental understanding of the relationships between all the information she learns from the environment. This fundamental understanding can then be applied to many areas, such as personal assistants like Alexa or Siri: with genuine understanding, they will comprehend user questions more accurately and won’t be so script-based.
Self-driving vehicles will be better at handling real-world scenarios. Robots will be better at navigating and interacting with people and their environment. Even the more basic AI functions of speech recognition and computer vision will be able to work better because the underlying comprehension will give them a leg up on interpreting their input as well.
JAXenter: In your opinion, is there any cause for concern regarding AGI and current AI developments? Is there any way it could potentially be misused on a large scale?
Charles Simon: Concern about the risks of AGI is very reasonable but is often misplaced. Like any powerful technology, AGI can be misused if it is initially placed in the wrong hands. The science-fiction scenario of machines becoming sentient and spontaneously turning on their creators is unlikely, because all these systems are goal-based and the system creators will set the initial goals. If those goals are to provide and extend knowledge for the betterment of mankind, that is completely different from the far riskier goals of attaining additional wealth and power.
JAXenter: What’s on the roadmap for the rest of 2022 at FutureAI?
Charles Simon: FutureAI has just completed its first financing round with $2 million in equity funding and has staffed up to 10 people with lots of functional software and a prototype pod. In the next quarter, we’ll be extending our Sallie prototype with additional exploration capabilities and improved pods to create “glimmers of understanding”.
In the remainder of the year, we’ll be building a dataset of things which Sallie has learned and expanding it to progressively more diverse and useful knowledge. Then we’ll be targeting specific applications with Sallie’s general knowledge.