Welcome to the Fasthosts ProActive Podcast: Spill the IT. Each episode, we'll sit down with some of the amazing ProActive team and chat through their experiences of the ups and downs of IT infrastructure management in small businesses. There's always plenty to chat about.

This episode is all about data centre networks! Our Head of Data Centre Fabrics, George Daly, talks us through our Tier IV data centre technology, what the massive migration between data centres entailed, and what that means for SMBs, both technically and commercially. Buckle up for a fascinating chat.

Listen on your favourite platform!

Episode transcript:

Intro (00:05):

Welcome to the Fasthosts ProActive Podcast: Spill the IT. Each episode, we'll sit down with some of the amazing ProActive team and chat through their experiences of the ups and downs of IT infrastructure management in small businesses. There's always plenty to chat about.

Charlotte (00:27):

Hi everyone. Welcome to this week's podcast episode, where we're going to be exploring Fasthosts' newest and most innovative data centre. And I have with me here George Daly, Head of Data Centre Fabrics at Fasthosts. And George, you're going to do a much better job than I would of introducing yourself. So if you wouldn't mind just giving everybody a sense of who you are and what you do?

George (00:50):

I'll try. Thanks. Yeah, so I'm George. I've been working for Fasthosts for over 20 years now, since 2003, and essentially I've been doing lots of different network-related things in that time. I've worked as a network engineer, run a team of network engineers, and moved into doing some network architecture and solutions architecture. And in my most recent incarnation, I'm working on new, modern data centre network fabric designs, which is what we're running in the Worcester data centre right now.

Charlotte (01:23):

Excellent. And the data centre opened in November last year, so it's still brand new. So what drove the investment in this new facility? Because obviously, previously you were hosting down in Gloucester. So what drove that change?

George (01:38):

Yeah, sure. So there were a number of drivers. I mean, most straightforwardly, we were running short on space, so that's a good problem to have: a problem of growth. We were growing, adding new products and new servers, and starting to reach physical capacity in the site. We'd expanded as far as we could. Also, in terms of power, there's only so much power that you can get in a city location like that. So yeah, I would say essentially space and power were the major factors there.

Charlotte (02:06):

Brilliant. And so tell us about that, because obviously the data centre is really innovative and has all the latest technology and network capability, fibre optics. So tell us a little bit about that and how it benefits the customers of Fasthosts?

George (02:22):

Yeah, for sure. So from a networking perspective, it's kind of an interesting challenge. Essentially, you have a well-established data centre with lots of different products installed in it, and the challenge is essentially: how do we connect that to a new site, and how do we move those products and services with as little impact as possible on the customer? From a fibre optic perspective, there are only a couple of ways really that you can connect sites together. You can use what's called dark fibre, where you get a dedicated fibre strand between those two sites, you essentially own or lease that from a fibre provider, and you can then add bandwidth to it as you need. Or you can just lease individual wavelengths, so essentially taking a wavelength of, for example, 100 gig and using that. The way that we connected the sites was basically to take diverse dark fibre paths, giving us the maximum possible bandwidth.

(03:19):

So as we grow and as we start moving services between the data centres, we can scale up on demand, essentially by adding new wavelengths at each end of the link. From an availability perspective, which is one of the key things as an architect when you're thinking about how you're going to connect these things together, we need to think about, okay, imagine a digger chops through the ground and we end up losing one of those links. So we need to ensure that we've got diverse paths, and that goes all the way from where the fibre leaves each site, making sure that there's no shared geographic location and no shared power in any of the sites.
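
A quick back-of-the-envelope calculation shows why that path diversity matters so much. A minimal sketch, assuming a purely illustrative per-path availability figure rather than anything measured on Fasthosts' links:

```python
# Why two diverse fibre paths beat one: a rough availability calculation.
# The 99.9% figure below is an assumption for illustration only.
single_path_availability = 0.999

# If the two paths share no duct, location or power, their failures are
# (roughly) independent, so the link is down only when both fail at once.
both_down = (1 - single_path_availability) ** 2
dual_path_availability = 1 - both_down

HOURS_PER_YEAR = 24 * 365
print(f"Single path:       ~{(1 - single_path_availability) * HOURS_PER_YEAR:.1f} hours of downtime per year")
print(f"Two diverse paths: ~{both_down * HOURS_PER_YEAR:.2f} hours of downtime per year")
# Roughly 8.8 hours a year versus about half a minute, but only if the paths
# are genuinely diverse, which is exactly the point about diggers and shared power.
```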

(03:57):

Yeah, I mean, essentially from a fibre optic perspective, that's kind of what it boils down to. The other major element of networking to think about is latency: in other words, how long does it take to forward traffic between the two sites? And we need to think about that on both of those physical paths. The speed of light applies in fibre optic cable, so obviously the shorter the path, the lower the latency, and the better the performance we can get for workloads that have to traverse those data centre links.
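
To put the speed-of-light point into numbers: light travels through fibre at roughly two thirds of its speed in a vacuum, which works out at around 5 microseconds of one-way delay per kilometre of path. A minimal sketch, using hypothetical route lengths rather than the real Gloucester to Worcester fibre routes:

```python
# Propagation delay in optical fibre: light is slowed by the glass's
# refractive index (~1.47), giving roughly 200,000 km/s, or ~5 us per km.
C_VACUUM_KM_PER_S = 299_792
REFRACTIVE_INDEX = 1.47
SPEED_IN_FIBRE = C_VACUUM_KM_PER_S / REFRACTIVE_INDEX  # ~204,000 km/s

def one_way_latency_us(route_km: float) -> float:
    """One-way propagation delay over a fibre route, in microseconds."""
    return route_km / SPEED_IN_FIBRE * 1_000_000

# Hypothetical lengths for two diverse paths between sites (not real figures).
for route_km in (60, 95):
    print(f"{route_km} km route: ~{one_way_latency_us(route_km):.0f} us one way, "
          f"~{2 * one_way_latency_us(route_km):.0f} us round trip")
```

Even the longer hypothetical path here adds under a millisecond of round-trip time, which mainly matters for workloads that cross the inter-site link many times per transaction.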

Charlotte (04:28):

And all those things you were talking about there, is that part of the redundancy? Because obviously it's a Tier IV data centre, which has a strong focus on redundancy. Is part of that related to that?

George (04:39):

No, absolutely. So I mean, there's redundancy in terms of how we connect those sites together in order to be able to migrate between the data centres, and there's also redundancy in terms of how we redundantly connect this new data centre to the internet. So we also take diverse fibres from this Worcester data centre, and we run those to our WAN points of presence, which are located at strategic locations in London with connectivity to the London Internet Exchange, to get us the shortest possible path through to the other networks, the consumer networks and provider networks, that customer traffic might need to be routed towards.

Charlotte (05:17):

Okay. So thinking about it, obviously you were moving everything from Gloucester to Worcester, or if not everything, a good deal of it. So that must have been a huge migration project. What was involved in that?

George (05:33):

It was immense, and there were months, if not years, of planning that went into being able to deliver that. In Gloucester, we've got various different products. Some of them are relatively simple from a networking perspective (it's simply giving a customer a bare metal server), and some of them are more complex and involve virtualization technologies, containerization and things like that. In terms of how we could execute that migration, for us the guiding principle was trying to minimise any impact on the customer, and that also guided our internal system administration. So that means as we move a server from one site to the other, we need to make sure the IP address remains the same, which is an interesting challenge from a network perspective: typically you would use IP, or layer three, to separate data centres, to make sure that a failure in one site can't be propagated to the other.

(06:27):

It's just good practice. So in effect, from a networking technology perspective, that means we need to use network virtualization, where we provide kind of an overlay. On top of the IP links between data centres, we provide basically a tunnel. So that means when one server moves from one site to the other, it takes its IP address with it, and from the server's perspective, it's entirely transparent. It doesn't know that it's in a new data centre; it thinks it's in the same VLAN, and it has the same experience. From a network design perspective, there are some challenges in there. For example, you have a server in a VLAN, and it has a default gateway, right? The first hop via which it routes to get towards other networks. In a classic setup, that would be in one particular data centre. That's where some of the new technologies we've leveraged in moving our IT into Worcester come in.

(07:17):

We use something called EVPN VXLAN, getting into the nitty-gritty, which is essentially a method of providing flexible layer two networks, so VLANs in old money, but delivering those across an underlying IP network. So you still get to use the robustness and the scalability of IP, but we also get the flexibility of layer two and VLANs. And in the new data centre, what that really means is that essentially our data centre engineers can install any workload anywhere.
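
For readers who want a feel for what that overlay actually looks like on the wire, here's a minimal sketch using the scapy packet library. Every address and the VXLAN network identifier below are made up for illustration; this is not Fasthosts' production configuration, just the general shape of a VXLAN-encapsulated frame:

```python
# pip install scapy
# Illustration of VXLAN encapsulation: the server's original layer two frame
# is wrapped in UDP/IP so it can cross a routed link between the two sites.
from scapy.all import Ether, IP, UDP, VXLAN

# The customer server's frame, exactly as it would look if both machines
# sat in the same VLAN in the same rack.
inner_frame = (
    Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02")
    / IP(src="192.0.2.10", dst="192.0.2.20")
)

# The overlay: the frame is tunnelled between the VXLAN tunnel endpoints
# (VTEPs) in each data centre, so the server keeps its IP address and VLAN
# even though the traffic is crossing an ordinary routed layer three link.
encapsulated = (
    Ether()
    / IP(src="10.0.1.1", dst="10.0.2.1")  # hypothetical VTEP addresses, site A to site B
    / UDP(dport=4789)                     # IANA-assigned VXLAN port
    / VXLAN(vni=5010)                     # the VNI stands in for the customer VLAN
    / inner_frame
)

encapsulated.show()
```

On the receiving side, the VTEP strips the outer headers and hands the original frame on untouched, which is why the move is transparent from the server's point of view.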

(07:50):

So a server can go to any rack in a data centre and it can be plugged into any VLAN. In older setups, there's an inherent problem with that: from a network architecture perspective, a large layer two domain, so being able to have VLANs extended across many racks in a data centre, is inherently also a large failure domain, because layer two fails in various interesting ways. So yeah, to make a long story short, it's about trying to get as much flexibility for the data centre as possible, so servers can be plugged in anywhere, but making sure that we do that in a way that is very robust and scalable.

Charlotte (08:28):

Interesting. So security must be a bit of a concern, I guess, while all that's going on. How did you manage that?

George (08:35):

So security is always a concern. Absolutely. Yeah. So that does get quite interesting and quite technical. Essentially, we use firewalls to provide secure separation between different network segments. That gets kind of interesting when you're thinking about how to provide firewalling for workloads that are in transition between data centres, because essentially the firewall needs to be in one site or the other. We had some fairly interesting conversations and designs that solve for that problem, essentially by taking a firewall cluster and deploying it with multiple nodes split between the data centres. The firewall control plane is actually also tunnelled in this EVPN VXLAN technology between the sites, so that means we can deliver the firewall in both data centres, highly available in both.

Charlotte (09:29):

Nice. Sounds complicated.

George (09:32):

Indeed. Complicated.

Charlotte (09:33):

And very thorough.

George (09:33):

Yeah, that's right. That's right. And the nature of these things is that as an architect, as a designer, when you're trying to build this stuff, it's tempting to make a super complex solution that solves for every possible failure mode. But the problem that you run into is that as you do that, you introduce more and more complexity. And in network design, I mean in any kind of design, complexity always has an inherent fragility to it. So the more complicated you make it, the more unforeseen failure modes there are. And then when things fail, they can be harder to fix and take longer to identify and fix. So there's always a tension there that we are working with to try and make sure we are giving all the features and the capabilities that are needed, but doing that in a way that is as simple as it can be.

Charlotte (10:18):

Yeah. To make it easier to maintain and...

George (10:20):

Easier to maintain, easier to troubleshoot, easier to fix and understand and explain to others. Yeah.

Charlotte (10:24):

Yeah. Brilliant. So after all of that, and obviously we talked at the beginning about the reasons for making the move, what improvements has it allowed you to provide as a company?

George (10:38):

Yeah, so I mean, there's a lot in there. In the network domain, I would say we've moved from a world where we had multiple different setups, essentially different network setups and designs for different products, and we've tried to bring that together and unify it. So there's a single network design, a single topology, a single set of technologies that are flexible enough to deliver all of the requirements that different products have, whether that's security, bandwidth, et cetera. In the broader sense, and I think Simon Young will talk about this in a future podcast...

(11:16):

We're able to leverage a lot of improvements around environmental aspects and sustainability. So the roof of the data centre is fully fitted with photovoltaic panels, and we're able to leverage a lot from that in terms of renewable energy. And in fact, the site itself is totally fed from green energy. So there's that aspect and also in terms of just the space that we have to grow into now. So I'm hoping that I won't have to be doing another data centre migration for many years to come. We have plenty of space, plenty of power and bandwidth, so we can really grow into this site now and focus on delivering new features and products that our customers need.

Charlotte (11:56):

Great. Well, it is an incredible site, I have to say. When we pulled up today, it was very impressive, so I'm looking forward to having a look around at some point. Now, you were talking about how you've combined products and really streamlined how you manage that in the data centre compared with where you were before, and I think that's really interesting, because some of the criticism that's levelled at the hyperscalers, so the AWSs of this world, is that their core customers are really the large enterprises, who obviously have very highly skilled IT teams and really understand all of this network infrastructure stuff, for want of a better word. And what I was picking up on there is the fact that you've changed how you manage these products. Is that an advantage for the smaller business, in the sense that they're not going to have to have that really in-depth knowledge in order to use your packages?

George (12:56):

Yeah, I would definitely agree with that. I mean, I think where Fasthosts positions itself against maybe some of the larger US-based hyperscalers, the likes of AWS, is that it's targeting a slightly different segment. It's targeting more SMBs, who may have a one-person IT team or perhaps don't have one at all. So it needs to be much more accessible, and that's definitely something that we try to build into the network design and network topology: to make it accessible to our own developers and users, but also to try to simplify it and make it accessible to our customers. And I'd say for sure, if you're comparing Fasthosts to some of the other hyperscalers, that's definitely a differentiating factor: we are solving some of the same problems of delivering solutions at great scale, but we are very much customer focused and trying to make it accessible and understandable.

(13:49):

It's quite hard, even as an IT professional, to keep track of, for example, the AWS technical landscape. They're launching new services all the time, which is great, but some of those are pretty complex and advanced, and your typical small company, even if they have a small IT department, is going to really struggle to work with that.

Charlotte (14:09):

Yeah, I guess as well, it's translating that new product information into a commercial reason to use it, isn't it?

George (14:16):

Yeah, for sure. Absolutely. Yes.

Charlotte (14:19):

So yes, you're absolutely right. Simon is going to be chatting to us on the next podcast. He's going to be talking about the Tier IV setup and then the sustainability aspect, so we're going to learn a lot more and dive into a bit more detail about those. But...

George (14:34):

Great. Looking forward to it.

Charlotte (14:35):

Thank you, George. It's nice to meet you.

George (14:36):

Likewise.

Charlotte (14:36):

And look forward to seeing you again soon.

George (14:36):

Okay.

Charlotte (14:36):

Thank you.

George (14:39):

Thanks. Cheers.

Outro (14:41):

Thank you for listening. We hope you enjoyed this episode. You can subscribe on Spotify or Apple Podcasts, or visit proactive.fasthost.co.uk for more info. See you next time.


Orlaith Palmer
