Part IIb: Catching Disinformation
The second part of the Ethika AI development story is all about disinformation. But before I launch into a whole new chapter on disinformation, let me break it down into subchapters. The future of AI is not built on sophisticated coding routines alone; the future of AI is inclusive. Inclusive as in a symphony of computer components and advanced coding routines, arranged and perfected through human ingenuity. At THEMATIC AI we define our AI, Ethika AI, as an Inclusive AI system that seamlessly combines safety, security, and versatility. In the introduction I covered our design decision to focus on safety and security, effectively bypassing public concern about AI behaving without human oversight. Let me now explain versatility from our perspective as the architects of Ethika AI, and why I use the term Inclusive AI.
To fully appreciate and make use of AI, such a system must factor in the capabilities of its computer components: the digital pathways that align and optimize the stream of data between components, the computations themselves, and the reverse feedback of the combined data output.
With the recent public release of ChatGPT v4, its data is drawn from large databases stored in pools of cloud servers. AI data research is still very much in the stage of improving accuracy, and rightly continues to do so. This refinement, even with excellent optimizing code structures, is a huge and time-consuming task. Industry groups focused on research, communications, data investigations, or narrow data retrieval do not benefit from this large data pool. The bulk of cloud data may be stored somewhere, but it is difficult to pinpoint because of the complexity and interconnection of too many data access points. In other words, this is where the phrase "making sense out of raw data" applies. In our design approach to make Ethika AI the most accurate, safe, and versatile system, the AI's capabilities are extended by introducing sensors and circuitry with the specific purpose of collecting data.
As examples, I name advanced CMOS photocells, environmental sensors, motor components, and semiconductors that improve the accuracy of feedback to the AI system. Ethika AI is ready from the start to accept industrial data exchange; the same code base, built on Apache MXNet, is easily expanded to cater to different industries: from environmental detection to support better modelling and tracking of climate change, to rapid and efficient sampling of disease specimens, to monitoring of hazardous environments in support of investigations of (war) crime sites or residential environmental improvements.
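To illustrate how such heterogeneous sensor feeds might be normalized before reaching the AI model, here is a minimal sketch. The sensor names, value ranges, and the `SensorReading` record are my illustrative assumptions, not the actual Ethika AI schema:

```python
from dataclasses import dataclass
import time

@dataclass
class SensorReading:
    # Normalized record shared by all sensor types (hypothetical schema).
    sensor_id: str
    kind: str        # e.g. "cmos_photocell", "environmental", "motor"
    value: float     # reading rescaled to the unit interval [0, 1]
    timestamp: float

def normalize(raw: float, lo: float, hi: float) -> float:
    """Clamp a raw sensor value into [lo, hi] and rescale to [0, 1]."""
    raw = max(lo, min(hi, raw))
    return (raw - lo) / (hi - lo)

def ingest(sensor_id: str, kind: str, raw: float, lo: float, hi: float) -> SensorReading:
    # Every feed, whatever its native range, arrives at the model
    # in the same normalized form.
    return SensorReading(sensor_id, kind, normalize(raw, lo, hi), time.time())

# A CMOS photocell reporting 0..4095 counts and a temperature sensor
# reporting -40..85 degrees C both reduce to comparable [0, 1] readings.
light = ingest("cam-01", "cmos_photocell", 2048, 0, 4095)
temp = ingest("env-07", "environmental", 21.5, -40.0, 85.0)
```

Under this assumption, porting the system to a new industry mainly means registering that industry's sensors with their native ranges; the model-facing record never changes.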
The word is versatile: the architects designed an AI system, Ethika AI, capable of fact-checking its environment in real time to enhance the feedback from Human Experts. An Inclusive AI system capable of matching demanding customer profiles across different industries.
Part IIb: Educating the Ethika AI System about Disinformation
In 2019, I was in Hong Kong working on several technology projects, just before the city-wide protests started. It was a defining moment for a territory that had seen many difficulties: under the British from 1841, during the Japanese occupation in WWII, and again under British rule until the formal handover to China in 1997.
Just imagine a Covid lockdown, but with the added explosions, noise, police, and citizens fighting for the freedom of speech and democratic reforms well known to the rest of the Western world. Hong Kong became a ghost city instead of the vibrant place it had been allowed to be before 1997, and in 2020 the Chinese authorities grabbed the chance offered by Covid to strike down any resistance. No ordinary life was possible from that moment, and most citizens either escaped confinement by leaving the territory or were tied to their homes. Employers finally agreed to let workers stay at home, a very un-Chinese principle, but this was Hong Kong: not an ordinary Asian city, but one that had seen poverty, growth, and prosperity on a scale envied by other Asian countries.
Our group made the most of the situation by redesigning our projects under the added restriction that going out to test a system would not be possible. Ethika AI was born at that time and required no more than a server environment, some fast laptops, several electronic cameras, and, most importantly, a bit of tinkering with NVIDIA Jetson micro-systems. We understood that complete AI designs would not be feasible; our budget constraints stopped us from installing high-performance server pools that could deliver on demand the bandwidth needed for large library data sets. Today, ChatGPT 3 and more recently 4 far surpass ChatGPT in its original form with their advanced reasoning capabilities. We decided to focus on safety and alignment to produce relevant output without omitting the required step of human feedback to train Ethika AI. In contrast to OpenAI's ChatGPT, our team never left out the human element of safety and security. It is Ethika AI's strongest card: it can be deployed in any industry with the safety and security of human feedback trained into the AI model, all custom-tailored.
Unlike competitors, we have the advantage of understanding both electronics and AI programming, which secures a well-balanced AI model like Ethika AI. Back in the Netherlands, we founded THEMATIC AI Limited to enable further development of the Ethika AI system. By then we had figured out that several wearable biosensors (WBSs) could be smartly placed to allow for greater feedback of human input to train Ethika AI. While Ethika AI's core code is based on Apache MXNet, the architecture is extremely flexible in porting and aligning micro-device sensors to train the AI model to an unseen level of refinement. Ethika AI first stores the human feedback from eye-contact tracking, hand and mouse movement tracking, and facial expressions in every session with a Human Expert. The data feeds are then processed according to standard AI modelling rules, but iterate with each session on a combination of stored behavioral data and new human feedback from different Human Experts. Human Experts play a pivotal role in the accuracy and acceptance of content: text, graphics, video, or photos. While other AI systems retrain by adding filtered data, Ethika AI bypasses this step because a Human Expert has already inspected the data to form reliable content.
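One way the per-session iteration could work is a running profile that blends stored feedback with each new session, for example via an exponential moving average. This is a minimal sketch under my own assumptions; the signal names (`eye_contact`, `mouse_activity`, `expression`) and the blending weight are illustrative, not Ethika AI's actual modelling rules:

```python
from dataclasses import dataclass

@dataclass
class SessionFeedback:
    # One Human Expert session's averaged signals (illustrative fields).
    eye_contact: float     # fraction of the session with on-screen gaze
    mouse_activity: float  # normalized pointer-movement rate
    expression: float      # facial-expression score in [-1, 1]

@dataclass
class FeedbackProfile:
    """Stored feedback that iterates with each new expert session."""
    alpha: float = 0.3     # weight given to the newest session (assumed)
    eye_contact: float = 0.0
    mouse_activity: float = 0.0
    expression: float = 0.0
    sessions: int = 0

    def update(self, fb: SessionFeedback) -> None:
        # The first session seeds the profile; later sessions blend the
        # stored values with the new feedback (exponential moving average).
        if self.sessions == 0:
            self.eye_contact = fb.eye_contact
            self.mouse_activity = fb.mouse_activity
            self.expression = fb.expression
        else:
            a = self.alpha
            self.eye_contact = (1 - a) * self.eye_contact + a * fb.eye_contact
            self.mouse_activity = (1 - a) * self.mouse_activity + a * fb.mouse_activity
            self.expression = (1 - a) * self.expression + a * fb.expression
        self.sessions += 1

profile = FeedbackProfile()
profile.update(SessionFeedback(0.8, 0.5, 0.2))   # first expert session
profile.update(SessionFeedback(0.6, 0.7, -0.1))  # a different expert
```

The design point the sketch captures is that no single expert dominates: each new session shifts the stored profile only partway, so the model accumulates consensus across different Human Experts rather than overwriting it.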
The design of Ethika AI relies on camera input that records the eyes and facial expressions in real time, and in real time also covers keyboard and mouse gesture movements.
In this case we use NVIDIA's face recognition, Broadcast, voice commands, and keyboard and mouse input to capture data from the Human Expert. Ethika AI then collects and inspects the points and clicks of the Human Expert, who is actively processing content on the web. This all occurs in real time, and the Human Expert can pause the recordings at any time.
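Leaving the NVIDIA capture stack aside, the pause behavior described above can be sketched generically: a recorder accepts gaze, click, key, and expression events, and simply drops whatever arrives while the expert has paused. The event kinds and the `SessionRecorder` class are my illustrative assumptions, not the actual Ethika AI capture pipeline:

```python
import time
from dataclasses import dataclass, field

@dataclass
class ExpertEvent:
    kind: str      # "gaze", "click", "key", "expression" (illustrative)
    payload: str
    timestamp: float

@dataclass
class SessionRecorder:
    """Collects Human Expert events; the expert may pause at any time."""
    paused: bool = False
    events: list = field(default_factory=list)

    def pause(self) -> None:
        self.paused = True

    def resume(self) -> None:
        self.paused = False

    def record(self, kind: str, payload: str) -> bool:
        # While paused, incoming capture events are dropped, not stored,
        # so nothing from the paused interval ever reaches the model.
        if self.paused:
            return False
        self.events.append(ExpertEvent(kind, payload, time.time()))
        return True

rec = SessionRecorder()
rec.record("click", "button=left x=120 y=480")
rec.pause()                          # expert pauses the recording
rec.record("gaze", "x=0.41 y=0.77")  # dropped while paused
rec.resume()
rec.record("key", "scancode=28")
```

Dropping rather than buffering paused events is a deliberate choice in this sketch: it keeps the pause a genuine privacy boundary for the Human Expert.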
As I explained, we are not only good at designing AI models but also have deep knowledge of computer architecture. To power the real-time capture of the Human Expert's processing of content, we rely on NVIDIA's Jetson series.
NEXT: Part II, Catching Disinformation
Journalist and Chief Editor at THEMATIC AI