Learning to see the world!


One of our goals in Nibblity is to make the space around you matter in the game. As technology develops, the devices we carry with us become capable of even more amazing things. We wanted to find ways to create meaningful gameplay experiences built on two-way interactions with the environment.

Placing your Nibblins in the world is the first step. Through augmented reality we allow players to put their adorable Nibblins on any flat surface.

We quickly focused in on the camera feed and what new features we could develop for it. Could the Nibblins be aware of where they are, and what they’re seeing? We adopted machine learning to try to identify what objects the player is looking at. You might have seen similar technology in some of your favorite photo apps, where you can search your albums for faces or objects.


Machine learning algorithms are trained on thousands, or even hundreds of thousands, of images matched to labels. Once the computer has been told which images match those labels, and which ones don’t, it develops a model. This is basically a set of rules the computer will run through each time you ask it what it’s seeing. If I showed you this image:

[Image: an apple. Photo by an_vision]
Then asked you what it was, most people would quickly say it’s an apple. But how did you know it’s an apple? Its shape? Its color? The texture? Maybe you need to pick it up and feel it, or taste it. Do some apples look like pears? It can be terribly confusing to explain how we do it, but it is something our brains are very good at.

In the past, training neural networks to recognize objects took a very long time, but things have gotten much better: computers in large cloud networks can now process enormous amounts of data quickly and cheaply.
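To make that idea concrete, here is a minimal sketch of the labeled-examples-in, model-out loop. It uses a toy scikit-learn classifier on random stand-in data rather than a real neural network or our actual pipeline, so every name and number in it is purely illustrative.

```python
# A toy sketch of training: labeled examples go in, a model comes out.
# Random numbers stand in for real photos; nothing here is Nibblity code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
images = rng.random((200, 32 * 32 * 3))  # 200 pretend 32x32 RGB images
labels = rng.integers(0, 2, size=200)    # 1 = "apple", 0 = "not apple"

model = LogisticRegression(max_iter=1000)
model.fit(images, labels)                # learn the "set of rules" from examples

new_image = rng.random((1, 32 * 32 * 3))
confidence = model.predict_proba(new_image)[0, 1]
print(f"apple confidence: {confidence:.0%}")
```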



We use machine learning in Nibblity to collect food through the VacYum, a handy tool that Mr. Chef gives you during the tutorial. The player looks at something in the world, holds down the button, and the VacYum sucks the tasty snacks up into the player’s inventory.

To do this, the model looks through all the labels it’s able to identify and returns a confidence for each one. In this example, while looking at a fork, the top possible labels are:

  • Cutlery – 75%

  • Tie – 68%

  • Pattern – 64%

  • Jacket – 54%

  • Tableware – 50%

Why does it think it could be a tie, or a pattern, or a jacket? Maybe because of the texture of the wood behind the fork? The computer doesn’t really know what it’s looking at within the scene, so it looks at everything all at once.
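If you’re curious what that output looks like in code, here’s a hedged sketch: the confidences below are the fork numbers from the list above (plus one made-up low scorer), and sorting them is how you’d surface the model’s best guesses.

```python
# Hypothetical classifier output: one confidence score per label.
predictions = {
    "Cutlery": 0.75, "Tie": 0.68, "Pattern": 0.64,
    "Jacket": 0.54, "Tableware": 0.50, "Fruit": 0.08,
}

# Sort by confidence and keep the top five, like the fork example above.
top_labels = sorted(predictions.items(), key=lambda kv: kv[1], reverse=True)[:5]
for label, confidence in top_labels:
    print(f"{label} – {confidence:.0%}")
```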

There are still some challenges with using machine learning. To keep the models fast and efficient we do have to make some trade-offs. The models don’t always know what an object is, and sometimes it’s too dark, or too bright, or the model is overconfident about a label. However, many of these elements are tunable as we further develop the feature and as more players use it.
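One of those tunable knobs is a simple confidence threshold. The sketch below is our own assumption about how such a filter might look; the 0.6 cutoff is made up for illustration, and a real value would be tuned from player data.

```python
# Illustrative only: reject guesses the model isn't confident enough about.
CONFIDENCE_THRESHOLD = 0.6  # made-up cutoff; tuned in practice

def best_label(predictions: dict[str, float]) -> str | None:
    """Return the most confident label, or None if nothing clears the bar."""
    label, confidence = max(predictions.items(), key=lambda kv: kv[1])
    return label if confidence >= CONFIDENCE_THRESHOLD else None

print(best_label({"Cutlery": 0.75, "Tie": 0.68}))  # -> Cutlery
print(best_label({"Dog": 0.40, "Cat": 0.35}))      # -> None (too dark? too blurry?)
```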



Sometimes it thinks my poor cat, Olimar, is a dog… but he will tell you with the most furious meow that he is assuredly NOT a dog! This makes sense if, for example, you point the camera at Olimar’s body and not his face: the bodies of cats and dogs can look relatively similar when the scan doesn’t capture their defining features.

Running large machine learning models that can identify objects can also be slow. To get this running on a mobile device, in real time, our model is focused on a couple hundred labels. All the processing is done locally on your device, because it would be slow and expensive for every player to connect to a server in real time. We then map each label into a smaller set of object categories we came up with, which determine what you can feed back to your Nibblins.
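We can’t share the actual game code, but a real-time scanning loop tends to look roughly like this sketch. The frame source, frame rate, and classify stub are all placeholders, not Nibblity’s real implementation.

```python
import random
import time

LABELS = ["Cutlery", "Tableware", "Shirt", "Dog", "Cat"]  # a real model has ~200

def classify(frame):
    """Placeholder for the on-device model: one confidence per label."""
    return {label: random.random() for label in LABELS}

def vacyum_scan(seconds: float = 2.0, fps: int = 10):
    """Classify the 'camera feed' a few times a second, entirely on-device."""
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        frame = None                         # stand-in for a real camera frame
        predictions = classify(frame)
        label = max(predictions, key=predictions.get)
        print("Best guess:", label)
        time.sleep(1 / fps)                  # throttle to keep the device cool

vacyum_scan()
```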

We didn’t want to force players to find specific objects, for a number of reasons. Feeding and collecting should be fun and silly, not a chore. So we focused on grouping similar individual labels together. Take “clothing” for example; we can detect individual objects like Shirts, Jackets, Gloves, Hats, Shoes, Jeans, Denim, Glasses and more. Putting them all in the same category lets us be more flexible about what we can confirm the player sees, and doesn’t send the player on a wild-goose chase. If you scan around with the VacYum, you’ll see the name of the label the model has found and the icon of the Nibblin snack you’ll receive as a result.
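In code, that grouping can be as simple as a lookup table. The “clothing” entries below come straight from the list above; the other rows and the category names are our own illustrative guesses, not the game’s real data.

```python
# Hypothetical label -> category table; only the clothing labels are from the post.
CATEGORY_OF = {
    "Shirt": "clothing", "Jacket": "clothing", "Glove": "clothing",
    "Hat": "clothing",   "Shoe": "clothing",   "Jeans": "clothing",
    "Denim": "clothing", "Glasses": "clothing",
    "Cutlery": "kitchen", "Tableware": "kitchen",
}

def snack_category(label: str) -> str | None:
    """Map whatever the model saw onto the snack category the Nibblins get."""
    return CATEGORY_OF.get(label)

print(snack_category("Denim"))    # -> clothing
print(snack_category("Cutlery"))  # -> kitchen
```

Because any label in a group counts, pointing the VacYum at jeans, a hat, or a pair of glasses can all confirm the same “clothing” snack.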

Have you found something that you think should be in a different category? Or maybe we should add some new categories altogether?! There are quite a few things we can detect that we haven’t implemented into the game yet. We’re always looking to improve our model and how we tweak the results, so let us know!

We had a lot of fun building this system, finding what objects were easy to identify and removing ones that weren’t. Using machine learning is an awesome way to make the game feel more connected to the world around you, and we love the opportunity to introduce something you might not have seen before into the game. There is so much more we want to do, and it’s only going to get better as we get feedback from our players, continue to learn, and as the technology improves.

If you’d like to give it a try, you can find out how to grab the game at Nibblity.com. Also, if you’d like to follow our development or ask any questions, join our Discord Server and come say hi!
