Designing Machine Learning Systems: An Iterative Process for Production-Ready Applications
Author: Chip Huyen
Publisher: O'Reilly Media
ISBN-10: 1098107969
ISBN-13: 978-1098107963
Price: 31.4
File Type: PDF
Pages: 386

Reviews

"This is, simply, the very best book you can read about how to build, deploy, and scale machine learning models at a company for maximum impact. Chip is a masterful teacher, and the breadth and depth of her knowledge is unparalleled."

- Josh Wills, Software Engineer at WeaveGrid and former Director of Data Engineering, Slack

"
There is so much information one needs to know to be an effective machine learning engineer. It's hard to cut through the chaff to get the most relevant information, but Chip has done that admirably with this book. If you are serious about ML in production, and care about how to design and implement ML systems end to end, this book is essential."

- Laurence Moroney, AI and ML Lead, Google

"One of the best resources that focuses on the first principles behind designing ML systems for production. A must-read to navigate the ephemeral landscape of tooling and platform options."
 
- Goku Mohandas, Founder of Made With ML

"Chip's manual is the book we deserve and the one we need right now. In a blooming but chaotic ecosystem, this principled view on end-to-end ML is both your map and your compass: a must-read for practitioners inside and outside of Big Techespecially those working at 'reasonable scale.' This book will also appeal to data leaders looking for best practices on how to deploy, manage, and monitor systems in the wild."
 
- Jacopo Tagliabue, Director of AI, Coveo; Adj. Professor of MLSys, NYU

"Chip is truly a world-class expert on machine learning systems, as well as a brilliant writer. Both are evident in this book, which is a fantastic resource for anyone looking to learn about this topic."
 
- Andrey Kurenkov, PhD Candidate at the Stanford AI Lab

From the Author

Ever since the first machine learning course I taught at Stanford in 2017, many people have asked me for advice on how to deploy ML models at their organizations. These questions can be generic, such as "What model should I use?" "How often should I retrain my model?" "How can I detect data distribution shifts?" "How do I ensure that the features used during training are consistent with the features used during inference?"
 
These questions can also be specific, such as "I'm convinced that switching from batch prediction to online prediction will give our model a performance boost, but how do I convince my manager to let me do so?" or "I'm the most senior data scientist at my company and I've recently been tasked with setting up our first machine learning platform; where do I start?"
 
My short answer to all these questions is always: "It depends." My long answers often involve hours of discussion to understand where the questioner comes from, what they're actually trying to achieve, and the pros and cons of different approaches for their specific use case.
 
ML systems are both complex and unique. They are complex because they consist of many different components (ML algorithms, data, business logic, evaluation metrics, underlying infrastructure, etc.) and involve many different stakeholders (data scientists, ML engineers, business leaders, users, and even society at large). ML systems are unique because they are data dependent, and data varies wildly from one use case to the next.
 
For example, two companies might be in the same domain (ecommerce) and have the same problem that they want ML to solve (recommender system), but their resulting ML systems can have different model architectures, use different sets of features, be evaluated on different metrics, and bring different returns on investment.
 
Many blog posts and tutorials on ML production focus on answering one specific question. While this focus helps get the point across, it can create the impression that each of these questions can be considered in isolation. In reality, changes in one component will likely affect other components. Therefore, it's necessary to consider the system as a whole when making any design decision.
 
This book takes a holistic approach to ML systems. It takes into account different components of the system and the objectives of different stakeholders involved. The content in this book is illustrated using actual case studies, many of which I've personally worked on, backed by ample references, and reviewed by ML practitioners in both academia and industry. Sections that require in-depth knowledge of a certain topic (e.g., batch processing versus stream processing, infrastructure for storage and compute, and responsible AI) are further reviewed by experts whose work focuses on that one topic. In other words, this book is an attempt to give nuanced answers to the questions mentioned above and more.
 
When I first wrote the lecture notes that laid the foundation for this book, I thought I wrote them for my students to prepare them for the demands of their future jobs as data scientists and ML engineers. However, I soon realized that I also learned tremendously through the process. The initial drafts I shared with early readers sparked many conversations that tested my assumptions, forced me to consider different perspectives, and introduced me to new problems and new approaches.

I hope that this learning process will continue for me now that the book is in your hands, as you have experiences and perspectives that are unique to you. Please feel free to share with me any feedback you might have for this book!
