A brain in a vet learns to play pong
The December AI News edition delivers an interesting perspective on Natural Language Processing (NLP) and the ethical consequences of the spread of this technology. All this along with an interesting comparison between artificial and biological neural networks.
The end of the year has also come with an interesting look at synthetic data and data generation for the sake of training machine learning models, in fact bringing forth the reality of machine learning devices delivering data for the sake of machine learning.
And also Meta, the company behind Facebook, delivered a bean machine.
In search for more transparent language models for NLP
While natural language processing models are one of the milestones of artificial intelligence, on the other hand, these models are not mistake-proof and can come with multiple hidden biases and risks.
Recent models, like GPT-3, excel in producing human-like texts, delivering high-quality samples indistinguishable from human-written material. Yet the models struggle when tasked with writing about issues like history and tend to show a prejudice regarding race, ethnicity, and religion.
To deal with this challenge, a group of researchers backed by Google has delivered a framework called “Attributable to identified sources” where the user can check if a statement is backed by any identifiable source used to train the model, be that Reddit or “Romeo and Juliet.”
While this approach does not guarantee that there will be no biases or hidden prejudices in a model, it does allow one to track the source of unexpected results and statements in order to make avoiding such incidents much easier in the future.
The full paper can be found on Arxiv.
The data problem and the synthetic solution
According to a report delivered by Datagen, nearly every image recognition team (99% to be precise) has experienced a project cancellation due to insufficient data. The top issues experienced by teams include poor annotation (48%), lack of domain coverage (47%), and the scarcity of data (44%).
An answer can be found in generating synthetic data. Generative Adversarial Networks (GANs) are increasingly effective in delivering lifelike images of nearly everything – from dogs, cars, and houses to facial expressions. This, delivered along with data augmentation techniques, can enable computer vision solutions to enter new market segments and ensure a wide enough representation of minorities in the datasets used in training.
More details regarding synthetic data can be found in this report.
Bean machine – measuring the Machine Learning model’s uncertainty
One of the interesting challenges regarding AI models is the fact that there is a great deal of uncertainty behind confidence in delivered answers. The degree of this tends to be hard to measure. On the other hand, though, the information on the level of uncertainty can be crucial for a data analyst, especially when an inaccurate prediction can be followed by tragic repercussions.
Meta, the company behind Facebook, has delivered a solution to this problem by delivering a bean machine – a framework that enables the analyst to measure the level of uncertainty in the prediction in an easier manner. The goal of the framework is to discover the unobserved properties of a model with automatic learning algorithms.
The name of the framework is inspired by a real device, where a customer can buy a single bean, yet it is impossible to determine the prize hidden inside.
More details regarding this technology can be found in this blog post.
Modern, sophisticated language models can produce texts of high quality and on multiple topics – starting from fiction and fantasy to press releases on construction, engineering, or the economy.
But the Large Language Models (LLM) can also be tuned to deliver personalized and highly targeted propaganda in the way a skilled writer could do. The text can be tweaked in whatever way the creator wills and deliver credible-sounding arguments for any statement given – even a stupid or dangerous one.
While this threat has been menacing NLP, a study delivered by Cornell University is a case study on spinning LLM to deliver a Propaganda-as-a-Service model. The model is triggered by certain activation words delivered in the input, so the neural network delivers the text with a particular spin in the response, preserving the context and the sense of the conversation. For example, the model can always produce a positive response when a particular name and surname are mentioned. By doing so, it can be used to poison online platforms with propaganda.
The paper highlights the need to deliver a reliable and fair ethical framework to work with language models. Also, it brings out a new threat posed by AI used by malicious actors.
The full paper can be read on Arxiv.
Brain in a vat learns how to play Pong
In fact a “brain in a vat” may be an overstatement here. The experiment was conducted using living brain cells, not an entire brain. The cells were placed in a dish, not in a vat. But the rest resembles a famous thought experiment initiated by Rene Descartes.
According to his intuition, if there were a genius who placed a brain in a vat and delivered all sensory information through electrodes or other means to the brain, thus mimicking reality, the brain would have no clue that it was living in a great simulation.
The motif is common in popular culture, with the Matrix franchise being one of the top examples of the creative usage of it. But this concept has not left the phase of thought experiment until recently.
A group of researchers from Cortical Labs in Australia managed to grow brain cells on top of electrode arrays that can stimulate the cells in the desired way. The stimulation delivered mimics the “pong” game, with an organic neural network controlling the paddle.
The research has shown that natural cells will manage to learn how to play pong in about five minutes. This research also provides an interesting perspective on augmenting silicone arrays with bio components, effectively creating a form of cyborg.
The full research can be found on the bioRxiv page.