

Get to know our team
About Us
Building computers that can understand language has been a long-standing challenge because of the complexities of nonverbal communication and social context. To overcome this, computers must integrate several technologies, such as computer vision, sound processing, and the analysis of behavioral cues. Together, these technologies enable a computer to understand spoken language in its social context, which is far more complex than simply learning words and grammar.
Successfully integrating such robots into industry is another challenge. In construction, for example, robots must be trained to identify and navigate critical structures that are not geometrically defined. To accomplish this, robot manufacturers must create clear, micro-scene training areas with realistic simulations, allowing the robots to gain the necessary knowledge in a safe, risk-free setting before being deployed in the real world.
Robots that can interpret spoken language can provide aid without needing to be prompted, and they are much easier to teach. They understand verbal commands and take appropriate action – this eliminates the need to control them with joysticks or to pre-program their movements “robotically.”
Like a human, a robot must be given the chance to learn from its mistakes in order to become adept at spoken language. It must understand that to break something is to alter its original shape, and that doing so can provoke negative reactions. During the learning period, when the robot breaks something, someone should exclaim: “What did you do?!” This helps the robot recognize the context of the intonation and prevents it from making the same mistake again. Just like any human being, a robot can avoid significant – and potentially dangerous – errors with the right training.
Our software revolutionizes assistive robotics by enabling robots to comprehend spoken language, interact naturally, and perform physical tasks – all without requiring any displays or additional steps. We are creating an easier way to communicate and interact with advanced functions such as those used in construction and medical care, thereby opening up tremendous opportunities for users all over the world.
How do children acquire language?
How do they recognize objects and differentiate between them?
What enables them to sense danger?
Is there a reason that robots should not be able to do these things?
The Idea:
Computers use a statistical approach to comprehend written text: they assess the probability of one word appearing close to another, such as ‘sky’ and ‘blue’ or ‘sun’ and ‘bright’. They repeat this process separately for every new language they learn. Children, on the other hand, cannot read, yet they still manage to learn languages better than any computer.
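As a rough illustration of what such a word co-occurrence statistic looks like, here is a minimal toy sketch in Python. It is our own illustrative example under simple assumptions (sentence-level co-occurrence over a tiny hand-written corpus), not the code of any particular system:

```python
# Toy illustration of co-occurrence statistics: how often two words,
# e.g. 'sky' and 'blue', appear together compared with how often the
# first word appears at all. Purely illustrative.
from collections import Counter
from itertools import combinations

sentences = [
    "the sky is blue and the sun is bright",
    "a bright sun in a blue sky",
    "the sky looks blue today",
]

pair_counts = Counter()   # number of sentences containing both words of a pair
word_counts = Counter()   # number of sentences containing each word

for sentence in sentences:
    words = set(sentence.split())
    word_counts.update(words)
    pair_counts.update(frozenset(p) for p in combinations(words, 2))

def cooccurrence_prob(w1: str, w2: str) -> float:
    """Estimate P(a sentence contains w2 | it contains w1)."""
    if word_counts[w1] == 0:
        return 0.0
    return pair_counts[frozenset((w1, w2))] / word_counts[w1]

print(cooccurrence_prob("sky", "blue"))    # 1.0  -> 'blue' accompanies 'sky' in every sentence
print(cooccurrence_prob("sky", "bright"))  # ~0.67 -> weaker association
```

Statistics like these capture which words tend to appear together, but they say nothing about what the words mean in the physical world – which is exactly the gap described below.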
Computer vision, in turn, is trained on text labels in a specific language. But reading labels cannot be how language is acquired, because children cannot read. So how do they learn language? They name objects by sound!
In June 2022, the Folks’Talks company was founded on these ideas.
Folks’Talks will make robots capable of understanding the world around them just as a child does, in any language, without relying on text.
Our approach to language acquisition has the robot learn from its mistakes rather than rely on statistics, ultimately turning it into a highly intelligent assistant.
Our Team:

Chaim Ash
CEO

Lev Veyde
Robot Interface

Amelia Hans
CMO

Sergii Paradiuk
Computer Vision

Felix Sorokin
Business Development

Current Solutions:
Attempts have been made to bring assistant robots into hospitals. However, these robots do not understand causality, cannot comprehend spoken language, and require attention themselves. As a result, they are not safe to operate unattended and still demand human involvement.
How your assistant robot will work once the Folks’Talks™ API is installed:
Command your robot verbally and it will intelligently obey. The Folks’Talks™ API provides realistic logic comprehension in any spoken language and can be adjusted to meet staff needs.
Folks’Talks™ business model:
- License per year (ACV TBD)
- Data in-house
- B2B – assistant robot providers
- B2B2B – hospitals, nursing homes
- B2B2C – people with special needs