SESSION + Live Q&A
Effective Data Pipelines: Data Management from Chaos
Creating automated, efficient and accurate data pipelines out of the often noisy, disparate and busy data flows used by today's enterprises is a difficult task. Data science and engineering teams may be asked to work together to build (or adopt) a management platform that helps funnel these streams into the company's so-called data lake. But how are these pipelines managed? Who is in charge of maintaining services and reducing costs? How do we ensure data is not lost, not duplicated and is factually accurate? These concerns, among others, will be discussed alongside implementation decisions, offering practical recommendations on the what and how of data automation workflows.
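The concerns the abstract raises (no lost data, no duplicates, factual accuracy) usually translate into explicit pipeline stages. As a minimal sketch only, not the speaker's implementation, a deduplication and validation step might look like this in Python; all names (`record_key`, `dedupe`, `validate`, the required fields) are illustrative assumptions:

```python
import hashlib

def record_key(record):
    """Stable key for deduplication: hash of the sorted field values."""
    payload = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def dedupe(records, seen=None):
    """Drop records already seen, so redelivery does not duplicate data."""
    seen = set() if seen is None else seen
    unique = []
    for rec in records:
        key = record_key(rec)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

def validate(record):
    """Reject records missing required fields (a stand-in accuracy check)."""
    return all(record.get(field) not in (None, "") for field in ("id", "value"))

incoming = [
    {"id": 1, "value": 10},
    {"id": 1, "value": 10},   # duplicate delivery from an at-least-once stream
    {"id": 2, "value": ""},   # fails the accuracy check
]
clean = [r for r in dedupe(incoming) if validate(r)]
```

Passing `seen` across batches (e.g. backed by a key-value store) is what would make the dedup step survive restarts; the in-memory set here only illustrates the idea.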
Speaker
Katharine Jarmul
Python engineer, Founder @kjamistan
Katharine Jarmul is a Python engineer and educator based in Berlin, Germany. She runs a data science consulting company, Kjamistan, and offers several private and public courses on data automation, cleaning and acquisition. She has worked on data extraction and analysis since 2008.
From the same track
Data Cleansing and Understanding Best Practices
Any data scientist who works with real data will tell you that the hardest part of any data science task is the data preparation. From cleaning dirty data to understanding where your data is missing and how it is shaped, the care and feeding of your data is a prime task for the...
Casey Stella
Committer and PMC member on the Apache Metron project
Reliable & Scalable Data Infra Ecosystem at Uber
Uber's vision is to make transportation as reliable as running water everywhere, for everyone. Data is key for Uber's 24x7 global business operations and making data available for different use cases across the company in a reliable, scalable and performant way is often challenging. In this...
Sudhir Mallem
Staff Engineer @Uber
Building a Data Science Capability From Scratch
This talk will cover the challenges, both technical and cultural, of building a data science team and capability in a large, global company. It will discuss best practices, lessons learned, and rewards of leveraging data effectively in the next frontier of data science: commercial insurance.
Victor Hu
Head of Data Science @QBE
Data Engineering Open Space
Building Data Pipelines in Python
This talk discusses the process of building data pipelines (extraction, cleaning, integration, and pre-processing of data): in general, all the steps necessary to prepare your data for your data-driven product. In particular, the focus is on data plumbing and on the practice of going from...
Marco Bonzanini
Data Scientist & Co-Organiser of PyData London Meetup