June has brought a whole heap of interesting news regarding the future of programming and coding. Apparently, there is a way to automate parts of the coding process and reshape the role of software engineers using AI-powered tools.
This edition also delivers information about low-code or no-code platforms and their future, predictions regarding AI ethics, and a short story about AI watching YouTube for educational purposes.
GitHub and Copilot
The foundation of an AI system is the ability to extract patterns from a gargantuan amount of data and then use them for the benefit of end users. A good example is Google’s auto-suggest which, as the name implies, automatically suggests queries one might wish to run in the search engine.
GitHub, probably the world’s largest code hosting platform, has launched Copilot, a new feature that delivers the wonder of code autocompletion to the user. The mechanics of the feature are similar to those seen in natural language processing – the AI-based system predicts the code to follow by leveraging a vast database of previously seen code.
But there are more similarities to Gmail’s text autocompletion – GitHub aims not to deliver a tool that writes code instead of the software engineer, but rather one that auto-completes their work in order to speed up production. Also, the vendor gives no warranty that the code will actually work. That is the programmer’s job, after all.
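Copilot’s internals are not public, but the core idea – predicting what code comes next from a corpus of previously seen code – can be sketched with a toy frequency model. Everything below (the tiny corpus, the whitespace tokenization, the `suggest` helper) is invented purely for illustration; the real system uses a large neural language model, not token counts.

```python
from collections import Counter, defaultdict

# Toy "corpus" of previously seen code, tokenized by whitespace.
corpus = [
    "for i in range ( n ) :",
    "for item in items :",
    "for i in range ( len ( items ) ) :",
]

# Count which token most often follows each token in the corpus.
successors = defaultdict(Counter)
for line in corpus:
    tokens = line.split()
    for current, nxt in zip(tokens, tokens[1:]):
        successors[current][nxt] += 1

def suggest(token):
    """Return the most frequent continuation of `token`, or None."""
    if token not in successors:
        return None
    return successors[token].most_common(1)[0][0]

print(suggest("for"))    # most common token seen after "for"
print(suggest("range"))  # most common token seen after "range"
```

The same principle – rank likely continuations by what the training data contains – scales from this one-token lookup to the full-function suggestions Copilot produces.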
Copilot is available on the Microsoft Visual Studio Code platform, accessible either in the cloud or locally. More about the feature can be seen in the GitHub blog post about their new Copilot.
The rise of the no-code era
While coders are getting powerful auto-suggestion tools, parts of their work may be heading toward obsolescence, at least according to Gartner’s predictions. The company has delivered an analysis suggesting that by 2024 up to 80% of new tech products will be delivered by non-tech specialists who have learned to use low-code or no-code platforms.
This trend is being driven by business roles that are traditionally outside of the tech ecosystem, usually by professionals who need to automate or support their daily operations yet lack the coding skills to build their own tools from scratch. The same mechanism stands behind the fact that Excel is arguably the most popular and widely used programming language in the world.
The no-code or low-code platforms boost this trend by enabling non-tech companies to build their own innovative solutions without the need to build skilled tech teams in-house. Thus, according to Gartner, tech companies will quickly find themselves competing with non-tech players in fields like retail or fintech.
More about this change and predictions regarding this upcoming reality can be found on Gartner’s website.
The Pew Research Center doubts that ethical AI will be widely adopted by 2030
The need to design ethical and fair AI models is growing, especially with rising questions about hidden biases or prejudice in both models and datasets. According to the experts surveyed by the center, the ability to predict or control people’s behavior will remain a primary goal in AI-based solution design, with ethics and fairness being a significantly lower priority.
To be more precise – according to 68% of the surveyed experts, “ethical principles focused on public good” (the exact wording used in the survey) will NOT be adopted. This puts Tooploox ahead of the rest of the market, with our team following stringent AI ethics principles.
On the other hand, the same group of experts highlights that despite the ethical questions that accompany new technology, society has always found a way to overcome such challenges.
The full report and more about the survey can be found on Pew Research Center’s page.
AI predicts human behavior from YouTube videos
It is relatively easy for people to predict a person’s next action from their body language. A greeting is a great example – when accompanied by a smile and friendly gestures, a suddenly raised fist signals a fist bump rather than a blow about to be struck.
But what comes naturally to people can be a huge challenge for machines. To tackle the problem of misinterpretation, a team of researchers from Columbia University’s School of Engineering and Applied Science trained an AI solution to predict the upcoming activities of an observed person from their body language.
The model was trained on YouTube videos depicting countless social situations. More about the research can be found in the article published on Science Daily.
Google uses reinforcement learning in chip floorplanning
Chip floorplanning is a non-obvious challenge in modern electronics. The core concept is about placing the basic functional elements of a chip in the right place and order. The problem can be seen both as an engineering challenge that requires specialized knowledge and experience, and as a mathematical one – from the latter perspective, it is a combinatorial challenge.
And despite the work already done on the subject, it has remained difficult to automate or even assist in essential floorplanning work.
To overcome this challenge, Google used a relatively unintuitive approach – reinforcement learning.
What is reinforcement learning?
Reinforcement learning is an AI technique in which the AI gathers experience from interactions with an environment, most often delivered by a simulator. The designer provides the AI-controlled agent with a reward function, which rewards desired actions and penalizes undesired ones.
Floorplanning, at its core, is about placing a finite number of elements on a two-dimensional board. The goal is to place them in a way that provides the desired synergies, like cutting energy consumption or response time.
Knowledge of the desired outcome allowed Google researchers to shape the reward function and arrive at an optimal design after a long session of trial and error. More details about this research can be found in the article published in Nature.
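Google’s actual reward combines several objectives such as wirelength, congestion, and density. A much-simplified stand-in can illustrate the idea of shaping a reward from the desired outcome: the hypothetical netlist and placements below are invented, and the reward is simply the negative total wire length, so that shorter wiring scores higher.

```python
# Hypothetical netlist: pairs of chip blocks connected by wires.
nets = [("cpu", "cache"), ("cpu", "io"), ("cache", "mem")]

def wirelength(placement):
    """Total Manhattan distance between all connected block pairs."""
    total = 0
    for a, b in nets:
        (ax, ay), (bx, by) = placement[a], placement[b]
        total += abs(ax - bx) + abs(ay - by)
    return total

def reward(placement):
    """Higher is better: the agent is rewarded for shorter wiring."""
    return -wirelength(placement)

# A compact placement beats a spread-out one under this reward.
compact = {"cpu": (0, 0), "cache": (0, 1), "io": (1, 0), "mem": (1, 1)}
spread  = {"cpu": (0, 0), "cache": (5, 5), "io": (9, 0), "mem": (0, 9)}
print(reward(compact))  # -3
print(reward(spread))   # -28
```

A reinforcement learning agent placing blocks one by one and maximizing such a reward would, over many trials, converge toward compact layouts – the same mechanism, scaled up to real objectives, that drives Google’s floorplanning agent.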