SESSION + Live Q&A
BBC iPlayer: Architecting for TV
TV apps have seen an explosion in usage over the last few years as audiences start the slow migration away from traditional broadcast viewing. For iPlayer, TV has become the dominant platform, with over half of iPlayer consumption coming from the biggest screen in the house via thousands of models of smart TVs, streaming sticks, games consoles and set-top boxes. Achieving universal reach, whilst also pushing the boundaries of experience, comes at a price, however. In this talk we explore the challenges of TV application development; from our early days chasing new native experiences, to the development of our open source libraries and standards-based certification. We’ll also touch on the next steps for iPlayer as we blur the lines between broadcast and IP television.
What is your talk about?
For the last 10 years my life has been TV application development, and everyone asks me: what is that? They assume it's all native development, but it's just JavaScript; they're just websites. My talk is about demystifying iPlayer: how we built the front end, how we scale. It's our engineering journey as we've moved from 14 different codebases just for iPlayer on TV down to two, how that's worked, and how standards have helped us.
What are the frameworks, the tools, the technology you are using?
Unfortunately, until very recently TV browsers typically couldn't manage React, mainly for memory reasons but also because some of the JavaScript functions you'd expect don't exist in all these browsers. We've got a homegrown application framework called TAL, which is open source. We just want one codebase, and we want to make sure it works on as many devices as possible. Because we're a public service organization, we've got this challenge of universality: when we build an app or make things available to the audience, it has to be available to the widest range of people possible, which means we have to target all the devices on the market, not just the ones we'd ideally target. That's a difference from other video on demand providers, who can target particular TVs because they're the easiest to work with, whereas we've had to target everything. To be fair, TVs today are way better than they were 10 years ago, so the challenge isn't quite the same, but we still need something that gives people a really responsive experience, that's personalized, and that's going to run on every device we can think of, not just the best ones.
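As an aside on those missing functions, here is a minimal sketch of the kind of feature detection and polyfilling a TV app typically needs before it can rely on standard JavaScript. Array.prototype.find is an assumed example of a gap on older devices, not one the talk names specifically.

```javascript
// Older TV browsers ship incomplete JavaScript engines, so apps guard or
// polyfill standard functions before using them. Array.prototype.find is
// an assumed example of such a gap; the defensive pattern is what matters.
if (!Array.prototype.find) {
  Array.prototype.find = function (predicate) {
    for (var i = 0; i < this.length; i++) {
      if (predicate(this[i], i, this)) {
        return this[i];
      }
    }
    return undefined;
  };
}
```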
What about the server side?
We started off with everything client side, then slowly evolved to being server-side rendered while still giving the impression that you've got a client-side application. So really it's a hybrid app that pretends it's a local experience, but it's actually all server-side smoke and mirrors. That's been part of our learning process over the last five years. The other thing we did was merge what used to be five different apps, a news app, a sports app and so on, into one codebase: one application across thousands of different devices that still looks like five different experiences. The audience can't tell that it's actually all server side; it still feels responsive, with the interactions people expect.
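A minimal sketch of that hybrid pattern, under assumptions: the /render endpoint and #app container are invented for illustration, and XMLHttpRequest is used because fetch can't be relied on in older TV browsers. The server renders each page; the client fetches the markup and swaps it in, so navigation feels local.

```javascript
// Hypothetical sketch: server-rendered pages fetched and swapped in on
// the client, giving the feel of a client-side app. The /render endpoint
// and #app container are invented names, not iPlayer's actual API.
function navigate(path) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/render' + path);
  xhr.onload = function () {
    if (xhr.status === 200) {
      // Replace the visible page with the server-rendered markup.
      document.getElementById('app').innerHTML = xhr.responseText;
    }
  };
  xhr.send();
}

// Usage: navigate('/home');
```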
What are some of the other areas you're talking about?
I'm going to talk a lot about some of our failures, particularly as we've scaled and become more personalized. We do have a big audience, and we get very spiky, unpredictable traffic: it's 8:00 at night and we get massive spikes as people start different programmes off the back of the broadcast. A lot of the work we've done is around making sure the audience doesn't just see errors any time one of those spikes occurs. Because we do a lot of live stuff, which tends to be very high traffic but spiky, people do things like tune in for just the last couple of minutes of a match and take down a whole system.
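One common defence against that kind of lockstep load, sketched below as an assumption rather than iPlayer's actual approach: retry failed requests with exponential backoff plus random jitter, so thousands of TVs don't all hammer the origin at the same instant, and fall back to something graceful instead of an error screen. loadWithBackoff is an invented name for illustration.

```javascript
// Hypothetical sketch: exponential backoff with jitter on the client, so
// synchronised spikes don't turn into synchronised retry storms.
function loadWithBackoff(url, attempt, onSuccess, onGiveUp) {
  var xhr = new XMLHttpRequest();
  var retry = function () {
    if (attempt >= 3) {
      return onGiveUp(); // e.g. show cached content, not an error screen
    }
    // Backoff of 1s, 2s, 4s, each with up to 1s of random spread so
    // clients don't retry in lockstep.
    var delay = Math.pow(2, attempt) * 1000 + Math.random() * 1000;
    setTimeout(function () {
      loadWithBackoff(url, attempt + 1, onSuccess, onGiveUp);
    }, delay);
  };
  xhr.open('GET', url);
  xhr.onload = function () {
    if (xhr.status === 200) {
      onSuccess(xhr.responseText);
    } else {
      retry();
    }
  };
  xhr.onerror = retry;
  xhr.send();
}
```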
Who is the main persona of the talk?
I suppose it's fellow engineers like me who perhaps don't know anything about how that whole ecosystem of TV apps works. It's partly about demystifying it, but also there are so many things we can learn from each other; that's the point of these conferences. It's really just the fellow engineer who's wondered: how does that work?
Speaker
David Buckhurst
Engineering Manager @BBC
David Buckhurst is an engineering manager at the BBC, where he looks after the teams who develop interactive TV applications such as iPlayer and Red Button. David has a long history of working with complex device-based challenges. He has been a vocal advocate of automated testing for years,...
From the same track
Real World Examples of FaaS
Cloudflare launched Cloudflare Workers over a year ago, bringing the ability to run JavaScript, and then any WASM-targeting language, on our 165+ locations around the world. Since then many companies have built functions and applications using Cloudflare Workers. This talk will look at real world...
John Graham-Cumming
CTO @Cloudflare
Airbnb’s Great Migration: Building Services at Scale
So you’ve decided to migrate from monolith to microservices, what next? Such a redesign to service-oriented architecture (SOA) is a long, arduous journey that benefits from an incremental, iterative approach. Yet, such a migration often must be done while still shipping new features,...
Jessica Tai
Software Engineer @Airbnb
What We Got Wrong: Lessons from the Birth of Microservices
Google deserves a lot of credit for imagining (and popularizing) what we now call "microservice architectures." That said, hindsight is 20/20, and many of the mistakes we made at Google are being recreated by the rest of the industry today. What did we get wrong about microservices at Google, and...
Ben Sigelman
Co-Founder @LightStepHQ & Co-Creator Dapper & @OpenTracing API Standard
Life of a Packet Through Istio
Istio is a service mesh for Kubernetes that offers advanced networking features. It provides intelligent routing, resiliency, and security features, so that service authors don’t have to keep re-implementing them. Istio is rapidly taking off and there are great introductory talks everywhere....
Matt Turner
Site Reliability Engineer @MarshallWace
Architectures Open Space
Shane Hastie
Director of Agile Learning Programs @ICAgile