Artificial intelligence has moved a long way past the killer robots of pop culture. In fact, AI already controls many aspects of our lives, both online and offline. In its most common modern form, it takes the shape of neural networks – specially trained systems that can ‘learn’ from the inputs they receive.

But creating an intelligent system requires huge amounts of computing resources – which can drive up costs significantly.

What is a neural network?

Also known as neural nets, neural networks are a way of carrying out machine learning, in which a computer learns to perform a task using examples it’s given.

For example, a neural network being trained to recognise dog breeds will be shown thousands of pre-labelled images of different breeds. The network will use these examples to try to recognise patterns in the images, so that when it’s eventually shown an unlabelled image it can attempt to ‘guess’ the breed based on the patterns it has learned.
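
To make the idea concrete, here’s a minimal training sketch. It assumes PyTorch is installed, and it stands in for the labelled breed photos with random tensors and made-up labels – a real project would load genuine pre-labelled images instead.

    import torch
    import torch.nn as nn

    num_breeds = 5
    images = torch.rand(100, 3 * 64 * 64)          # 100 dummy 64x64 RGB 'photos'
    labels = torch.randint(0, num_breeds, (100,))  # their pre-assigned breed labels

    # A very small network: inputs feed a hidden layer, then one output per breed
    model = nn.Sequential(nn.Linear(3 * 64 * 64, 128), nn.ReLU(), nn.Linear(128, num_breeds))
    optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(10):                  # show the labelled examples repeatedly
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()                      # nudge the connections to reduce the error
        optimiser.step()

    # 'Guess' the breed of an unseen, unlabelled image from the patterns learned so far
    guess = model(torch.rand(1, 3 * 64 * 64)).argmax(dim=1)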

They’re called neural networks because they’re loosely modelled on the human brain. Essentially, a neural net consists of thousands, if not millions, of simple nodes that are connected to each other – with the nodes playing the role of neurons and the connections the links between them.

Each time a neural network is given a new input, thousands of these nodes make small decisions that combine into the final outcome. And to make a neural network more accurate, it should be given as many training examples as possible – which is what puts such a strain on the hardware it runs on.
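
To give a rough sense of scale, the short sketch below builds a small fully connected network (again assuming PyTorch, with layer sizes chosen purely for illustration) and counts its trainable connections – even a modest model for 64x64 images already has several million.

    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(3 * 64 * 64, 512),  # every input value connects to every node in this layer
        nn.ReLU(),
        nn.Linear(512, 128),
        nn.ReLU(),
        nn.Linear(128, 5),
    )

    connections = sum(p.numel() for p in model.parameters())
    print(f"{connections:,} trainable connections")  # roughly 6.4 million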

What can neural networks be used for?

In business, the aim of most neural networks is to essentially automate a task that would be menial or time-consuming for human staff to complete. These vary wildly in scope, but here are some examples:

  • Social networks automatically tagging friends’ faces in uploaded images
  • Video streaming sites suggesting videos to watch next based on your past watching habits
  • Self-driving cars using sensors to ‘see’ what’s happening around them
  • Businesses predicting customer behaviour
  • Voice recognition software on mobile and home assistants

Larger corporations rely on huge data centres to train their neural nets. But more recently, we’re seeing more and more hobbyist developers try their hand at machine learning.

How much computing power is needed to make a neural network?

While developing a neural net involves a steep learning curve in itself, for many there’s a barrier that comes even before that step: getting hold of hardware capable of running a deep learning program.

Overall, the resources you need will depend on the scale of your deep learning project – but the requirements can get costly even if you’re just starting out. Neural networks need processing power to carry out their ‘learning’, and the more power they have, the faster each training run completes. This can be the difference between training taking minutes or days.

On physical hardware, much of the strain falls on the GPU (graphics processing unit). In fact, even if you only want a computer for hobbyist purposes, a dual-GPU setup is recommended to get somewhat timely results. And the video cards used don’t come cheap, as they need a large amount of video RAM (VRAM) to be effective.
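
As an illustration of why the GPU matters, here’s a hedged sketch of how a deep learning framework hands work to a graphics card, assuming PyTorch on a machine with a CUDA-capable GPU; without one, it quietly falls back to the much slower CPU.

    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = nn.Linear(1024, 10).to(device)    # the model's weights now live in video RAM
    batch = torch.rand(256, 1024).to(device)  # each batch of training data must also fit in VRAM
    output = model(batch)

    if device.type == "cuda":
        props = torch.cuda.get_device_properties(0)
        # VRAM is the usual bottleneck: bigger models and batches need more of it
        print(f"{props.name}: {props.total_memory / 1e9:.1f} GB of VRAM")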

Processing power is another big part of creating an effective neural network. More specifically, maximising the number of CPU cores running in parallel is what makes the many complicated calculations complete more quickly.
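
Here’s a small sketch of putting those cores to work, again assuming PyTorch – it detects how many cores the machine has and lets the tensor maths spread across all of them. The exact thread count is an assumption you’d tune for your own setup.

    import os
    import torch

    cores = os.cpu_count() or 1
    torch.set_num_threads(cores)  # let CPU-side tensor operations use every available core
    print(f"Using {torch.get_num_threads()} threads across {cores} CPU cores")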

RAM is also key, as it determines how much training data can be held in memory at once. 16GB of RAM is recommended as a minimum for a hobbyist machine, but it should be increased wherever possible. However, as with all of these components, increasing the RAM significantly can run up huge costs.
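
One common way to keep RAM use under control is to avoid loading the whole training set at once. The sketch below, assuming PyTorch’s DataLoader, generates each item on demand and feeds the network one small batch at a time; in a real project, __getitem__ would read a single pre-labelled image from disk instead of making up a random tensor.

    import torch
    from torch.utils.data import DataLoader, Dataset

    class LazyImageDataset(Dataset):
        """Produces each item on demand, so the full dataset never sits in RAM."""
        def __init__(self, num_items):
            self.num_items = num_items  # in practice, store a list of file paths here

        def __len__(self):
            return self.num_items

        def __getitem__(self, idx):
            # in practice: read and decode one pre-labelled image from disk
            return torch.rand(3, 64, 64), idx % 5

    if __name__ == "__main__":  # guard needed when num_workers > 0 on some platforms
        loader = DataLoader(LazyImageDataset(10_000), batch_size=64, num_workers=2)
        for images, labels in loader:
            pass  # only one 64-image batch is held in memory at a time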

Avoiding costs for a neural network

With these costs in mind, starting to develop a neural network may seem like something that’s out of budget. You also need to consider that even if you began with a setup that met all of the minimum requirements, you might find yourself hurting for more further down the line. If you want to ‘teach’ your neural network more quickly, or expand what it can do, the CPU and RAM requirements could suddenly double.

That’s why many hobbyists are moving to the cloud to host their neural networks. Using cloud resources, developers can give their virtual environment exactly the CPU, RAM and storage they need. On Fasthosts Cloud Servers, you can also scale these resources up and down as you like, so your project can grow in the future should you need it to.

A big benefit of using cloud servers, especially for hobbyists, is that there’s no upfront cost – just a monthly fee. So instead of spending a lump sum on a physical machine, you can get hold of a future-proof, remote rig and get an accurate estimate of how much it will cost each month.

Check out our website to explore the Fasthosts CloudNX platform.