January delivered an impressive amount of Natural Language Processing (NLP) news in which language itself was not the focus.
This month has shown how NLP applications deal with entities that are not exactly natural language, images and the viral genome among them.
A big month for pairing image processing with natural language processing
OpenAI has clearly been busy through January, delivering an interesting model that combines natural language processing with image generation and recognition.
DALL·E – what you say is what you get
The neural network, dubbed DALL·E, aims to deliver an image from a provided text description. The 12-billion-parameter network was trained on image-description pairs.
The neural network works well across styles: it can deliver a drawing or a photograph-like image, so it does a good job both when asked for “a sketch of a cat” and when asked for “a store with an ‘OpenAI’ sign.”
The network has also shown an interesting capability for delivering less intuitive images, like “a chair of an avocado shape,” and an ability to create hybrids and less common combinations, such as “a snail in the shape of a harp” or “a horseradish in a tutu walking a dog.”
What makes this neural network unique is its ability to work with abstract or even nonexistent concepts, such as the avocado-shaped chair or the horseradish in a tutu.
The name “DALL·E” is a portmanteau of Salvador Dalí and Pixar’s WALL-E. A full description of the neural network is available on OpenAI’s website.
Concept whitening to deliver more interpretable neural networks
Any AI-based solution can be seen as a black box: it works, but no one is quite sure how, or why it sometimes produces an unexpected result. As real-life AI applications multiply, so does the risk of glitches and unintended consequences stemming from this lack of interpretability. Amazon’s infamous AI-powered resume scanner, which turned out to discriminate against women, is only the first example that comes to mind of why AI solutions need to be built responsibly.
A paper published in the peer-reviewed journal “Nature Machine Intelligence” shows a way to make networks more interpretable without introducing an entirely new training procedure. Instead, an additional layer is added to the network that hints at the reasoning and computations leading to a particular output.
The key component is this additional layer, which retains information about the concepts crucial for the task: when describing a bedroom, those would be a “bed,” a “window,” or a “curtain,” among others, and rather not a “spaceship.” Likewise, when it comes to CV analysis, the key concepts would be “experience,” “education,” or “skills,” rather than “gender” or “ethnicity.”
The concept layer acts as a form of artificial supervisor that enforces “reasonable reasoning” in the neural network by “whitening” the latent space, which could otherwise hide various glitches and mistakes.
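To make the “whitening” idea concrete, here is a minimal sketch of ZCA whitening applied to a batch of latent activations, assuming a plain NumPy setting; the published concept-whitening layer additionally rotates the whitened axes to align them with labeled concepts, a step omitted here.

```python
import numpy as np

def zca_whiten(z, eps=1e-8):
    """Decorrelate and normalize latent activations z of shape (batch, dim)."""
    z = z - z.mean(axis=0)            # center each latent dimension
    cov = z.T @ z / (len(z) - 1)      # sample covariance of the latent space
    vals, vecs = np.linalg.eigh(cov)  # eigendecomposition (cov is symmetric)
    w = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return z @ w                      # whitened: near-identity covariance

# Correlated toy "latent activations" stand in for a network's hidden layer.
rng = np.random.default_rng(0)
latent = rng.normal(size=(256, 8)) @ rng.normal(size=(8, 8))
white = zca_whiten(latent)
print(np.abs(np.cov(white, rowvar=False) - np.eye(8)).max())  # close to zero
```

After whitening, each latent dimension carries decorrelated, unit-variance information, which is what makes it possible to map individual axes to human-interpretable concepts.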
Historical text summaries delivered by machine
One of the key problems in analyzing a historical text is the need for historically accurate context. In fact, reading historical texts, be they Latin, ancient Greek, classical Chinese, or medieval German, is impossible without an apparatus that brings out the meaning hidden between the words.
A further challenge lies in delivering the summary in a contemporary language, not a historical version of it, since words change their meanings far more often than it appears at first glance. The work is hard and unrewarding, usually done in the preparation phase of proper research. That’s why researchers aimed to automate the process with neural networks.
To do so, the network was trained on a database of historical German and Chinese texts along with best-in-class summaries delivered by historians.
When later evaluated by experts, the machine-generated summaries showed a satisfying level of integrity and consistency while significantly boosting the speed of the process. More about this neural network and the associated research can be found in this arXiv paper.
Approaching viruses in an NLP way
Ever wondered why some vaccines last for one’s entire life while others (like the common flu shot) need to be taken every year? The answer lies in viruses’ ability to mutate: to change their structure fast enough to escape the antibodies already produced. This process is known as “viral escape.”
Viral escape is a key reason behind the lack of vaccines for many modern illnesses.
So what does this all have to do with natural language processing (NLP)? The key insight is that a machine can process a language regardless of whether humans find it comprehensible, and a DNA sequence is, in fact, a sophisticated language: it encodes the proteins that construct a living (or not quite living) being, viruses being no exception.
In this framing, “grammar” determines whether a particular protein is functional at all, while “semantics” captures how the protein’s meaning changes in the course of a mutation. A virus is unlikely to change the functions responsible for its survival, such as replication, so identifying those conserved sequences points the way toward a more universal vaccine.
Employing NLP models to read the genome thus enables researchers to identify which sequences are likely to change in the future and which are not, making a vaccine potentially more effective than it would normally be.
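As a toy illustration of the two scores such a model combines, the sketch below fits a trivially small bigram “language model” over invented amino-acid fragments; the actual research trains a deep neural language model on full viral protein sequences, and the fragments here are purely hypothetical.

```python
from collections import Counter

# Hypothetical short protein fragments used as a stand-in training corpus.
TRAIN = ["MKTAY", "MKTAF", "MKSAY"]

# Fit bigram counts as a minimal "language model" over amino acids.
bigrams = Counter(pair for seq in TRAIN for pair in zip(seq, seq[1:]))
unigrams = Counter(ch for seq in TRAIN for ch in seq[:-1])

def grammaticality(seq, alpha=0.1, alphabet=20):
    """Smoothed bigram probability: does the mutant still 'read' as a protein?"""
    p = 1.0
    for a, b in zip(seq, seq[1:]):
        p *= (bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * alphabet)
    return p

def semantic_change(seq, ref):
    """Hamming distance as a crude stand-in for the distance between
    learned sequence embeddings."""
    return sum(a != b for a, b in zip(seq, ref))

# An escape candidate scores high on BOTH: it still "reads" like a valid
# protein (grammar) yet means something different to antibodies (semantics).
ref = "MKTAY"
for mutant in ["MKTAF", "MKQAY"]:
    print(mutant, grammaticality(mutant), semantic_change(mutant, ref))
```

Here “MKTAF,” seen in training, gets a far higher grammaticality score than the unseen “MKQAY,” mirroring how the real model flags mutations that remain biologically viable.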
More about the research can be found on MIT News.
The stories above show that a machine’s understanding of natural language can be far from our intuitive version of it. The use cases vary from building images from a description alone, a foundation of every graphic artist’s work, through reading long-forgotten versions of natural languages, to reading DNA code, a language far from human comprehensibility.
But in fact, it would be foolish not to call it natural.