Artificial Intelligence Can Improve Itself Using Only a Few Data Samples

Engineers at Gamalon have developed a programming technique that allows artificially intelligent machines to improve their own knowledge without needing large amounts of data.

Engineers are constantly refining AI programming, developing techniques that help systems improve themselves. The most common technique in AI-powered devices is the deep-learning algorithm. Deep-learning algorithms typically rely on large data sets to form associative patterns: the system needs repeated examples and enormous amounts of data fed into its neural network before it can draw reliable inferences.

Google’s “Quick, Draw!” app is an example of a neural network that uses a deep-learning algorithm to process drawings. The app asks you to doodle a random object and then tries to guess what you’re drawing, using patterns it has generated by scanning previous drawings of the same object. As more people use the app, more data accumulates in its repository, helping it make more accurate guesses.

Gamalon tried a different approach, a technique it calls Bayesian Program Synthesis (BPS). Named after Thomas Bayes, an 18th-century mathematician, the framework revolves around continuously updating the probability of a hypothesis as more evidence becomes available.
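The updating rule at the heart of this framework can be shown in a few lines. The sketch below is purely illustrative, not Gamalon's actual code; the hypotheses ("lamp", "chair") and all the probability values are made-up numbers chosen for the example.

```python
# Illustrative only: a minimal Bayesian update, not Gamalon's BPS implementation.

def bayes_update(priors, likelihoods):
    """Return the posterior P(hypothesis | evidence) via Bayes' rule."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    evidence = sum(unnormalized.values())  # P(evidence), the normalizer
    return {h: p / evidence for h, p in unnormalized.items()}

# Start with no preference between two hypothetical doodle interpretations.
posterior = {"lamp": 0.5, "chair": 0.5}

# Treat each stroke of the doodle as a new piece of evidence; the
# likelihood numbers below are invented for illustration.
for likelihoods in [{"lamp": 0.8, "chair": 0.3},   # a long vertical line
                    {"lamp": 0.9, "chair": 0.2}]:  # a shade-like trapezoid
    posterior = bayes_update(posterior, likelihoods)

print(posterior)  # probability mass has shifted heavily toward "lamp"
```

Note how only two observations are enough to move the posterior from 50/50 to a confident answer, which is the intuition behind needing far fewer samples than a deep-learning system.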

Because probabilistic programming uses this same structure of reasoning, the code needs only a few data samples to draw an inference. This is the biggest difference between the two approaches: a deep-learning system requires far more data than BPS does.

Like a deep-learning system, BPS also grows more capable as the program reads more data. This learning process lets the program upgrade itself, digesting new information to deliver more accurate results.

For comparison, researchers at Gamalon tested their system against “Quick, Draw!”

They drew a floor lamp beside a chair, but Google’s system failed to recognize the scene, offering a house and a church as the closest matches to the lamp drawing. The failure came down to the system’s inability to treat the two doodles as separate objects: it tried to combine them into one when it should have seen two different items.

Gamalon’s BPS was subjected to the same test and recognized the lamp and the chair separately. Even when the drawing of the chair was altered, the system still identified both objects. This was achieved by first teaching the system simple visual concepts: once it could recognize each part of a chair and of a lamp, it could recognize the objects even when they varied in size and style.

The use of probabilistic programming may have a significant impact on the development of machine-learning processes for artificial intelligence. It can be applied in various fields, such as structuring business data and speech recognition.

The development of BPS has opened new avenues for research, and we’re eager to see where it leads in the realm of AI technology.