In 2019, I was in Hong Kong working on several technology projects just before the city-wide protests started. It was a defining moment for a territory that had seen many difficulties: British rule from 1841, the Japanese occupation during WWII, and British rule again until the formal handover to China in 1997.
Just imagine a Covid lockdown, but with added explosions, noise, police, and citizens fighting for the freedom of speech and democratic reforms well known to the rest of the Western world. Hong Kong became a ghost city instead of the vibrant place it had been before 1997, and in 2020 the Chinese authorities used Covid as a pretext to strike down any remaining resistance. No ordinary life was possible from that moment on; most citizens either escaped confinement by leaving the territory or were tied to their homes. Employers finally agreed to let staff work from home, a very un-Chinese principle, but this was Hong Kong: not an ordinary Asian city, but one that had seen poverty, growth, and prosperity on a scale that was the envy of other Asian countries.
Our group made the most of the situation by redesigning our projects under the added restriction that going out to test a system would not be possible. Ethika AI was born at that time and required no more than a server environment, some fast laptops, several electronic cameras, and, most importantly, a bit of tinkering with NVIDIA Jetson micro-systems. We understood that complete AI designs would not be feasible; our budget constraints stopped us from installing high-performance server pools that could deliver on demand the bandwidth needed for large library data sets. Today's ChatGPT, built first on GPT-3 and more recently on GPT-4, far surpasses the original release with its advanced reasoning capabilities. We decided to focus on safety and alignment, producing relevant output without omitting the required step of human feedback to train Ethika AI. In contrast to OpenAI's ChatGPT, our team never left out the human element of safety and security. That is Ethika AI's strongest card: it can be deployed in any industry with the safety and security of human feedback trained into the AI model, all custom tailored.
Unlike competitors, we have the advantage of understanding both electronics and AI programming, which lets us build a well-balanced AI model like Ethika AI. Back in the Netherlands, we founded THEMATIC AI limited to enable further development of the Ethika AI system. By then we had figured out that several wearable biosensors (WBSs) could be smartly placed to allow for richer human feedback to train Ethika AI. While Ethika AI's core code is based on Apache MXNet, the architecture is flexible enough to port and align micro-device sensors, training the AI model to an unseen level of refinement. Ethika AI first stores the human feedback from eye-contact tracking, hand and mouse movement tracking, and facial expressions in every session with a Human Expert. The data feeds are then processed according to standard AI modelling rules, but they iterate with each session on a combination of stored mimicry and new human feedback from different Human Experts. Human Experts play a pivotal role in the accuracy and acceptance of content: text, graphics, movies, or photos. While other AI systems retrain by adding filtered data, Ethika AI can bypass that step because a Human Expert has already inspected the data to form reliable content.
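To make the session-by-session iteration concrete, here is a minimal, hypothetical Python sketch of how the stored feedback could accumulate. The `FeedbackSample` and `FeedbackStore` names, the field layout, and the approval metric are all illustrative assumptions, not Ethika AI's actual code; the point is only that each new expert session extends the stored feedback rather than replacing it.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackSample:
    """One human-feedback observation captured during an expert session.
    All fields are illustrative placeholders for the real sensor feeds."""
    gaze: tuple        # (x, y) eye-tracking fixation point
    mouse: tuple       # (x, y) pointer position at the same instant
    expression: str    # coarse facial-expression label, e.g. "neutral"
    approved: bool     # whether the expert accepted the content shown

@dataclass
class FeedbackStore:
    """Accumulates samples across sessions, so each new session
    iterates on the stored feedback instead of starting from scratch."""
    samples: list = field(default_factory=list)

    def add_session(self, new_samples):
        # Stored feedback from earlier experts is kept; new feedback is appended.
        self.samples.extend(new_samples)

    def approval_rate(self):
        # A simple aggregate signal the training step could consume.
        if not self.samples:
            return 0.0
        return sum(s.approved for s in self.samples) / len(self.samples)
```

In a real pipeline the aggregated samples would feed an MXNet training step; here the store only shows the bookkeeping side of iterating on combined old and new feedback.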
The design of Ethika AI relies on camera input that records the expert's eyes and facial expressions in real time, while in parallel capturing keyboard and mouse gesture movements.
In this case we use NVIDIA's face recognition, broadcast, and voice-command tooling, together with keyboard and mouse input, to capture data from the Human Expert. Ethika AI then collects and inspects the points and clicks of the Human Expert while they actively process content on the web. This all occurs in real time, and the Human Expert can pause the recordings at any moment.
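The capture loop just described, with several real-time input sources and an expert-controlled pause, can be sketched in a few lines of Python. This is a toy model under stated assumptions: the `CaptureSession` class and its `sources` callables are hypothetical stand-ins for the actual NVIDIA capture APIs, which are not shown here.

```python
import time

class CaptureSession:
    """Toy real-time capture loop: on each tick it polls every input
    source (stand-ins for gaze, expression, and mouse feeds) and
    honours a pause flag so the expert can suspend recording."""

    def __init__(self, sources):
        self.sources = sources   # dict of name -> callable returning one reading
        self.paused = False
        self.recorded = []

    def pause(self):
        self.paused = True

    def resume(self):
        self.paused = False

    def tick(self):
        """Capture one timestamped frame of readings, unless paused."""
        if self.paused:
            return None
        frame = {name: read() for name, read in self.sources.items()}
        frame["t"] = time.time()
        self.recorded.append(frame)
        return frame
```

In practice each source would stream from a camera or input device at its own rate; a single polled tick is used here only to keep the pause semantics easy to see.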
As I explained, we are not only good at designing AI models but also have deep knowledge of computer architecture. To power the real-time capture of the Human Expert's processing of content, we rely on NVIDIA's Jetson series.
NEXT: Part II, Catching Disinformation