{"pk":57201,"title":"[Solution] IPA: Inference Pipeline Adaptation to achieve high accuracy and cost-efficiency","subtitle":null,"abstract":"Efficiently optimizing multi-model inference pipelines for fast, accurate, and cost-effective inference is a crucial challenge in machine learning production systems, given their tight end-to-end latency requirements. To simplify the exploration of the vast and intricate trade-off space of latency, accuracy, and cost in inference pipelines, providers frequently opt to consider one of them. However, the challenge lies in reconciling latency, accuracy, and cost trade-offs. To address this challenge and propose a solution to efficiently manage model variants in inference pipelines, we present IPA, an online deep learning Inference Pipeline Adaptation system that efficiently leverages model variants for each deep learning task. Model variants are different versions of pre-trained models for the same deep learning task with variations in resource requirements, latency, and accuracy. IPA dynamically configures batch size, replication, and model variants to optimize accuracy, minimize costs, and meet user-defined latency Service Level Agreements (SLAs) using Integer Programming. It supports multi-objective settings for achieving different trade-offs between accuracy and cost objectives while remaining adaptable to varying workloads and dynamic traffic patterns. Navigating a wider variety of configurations allows IPA to achieve better trade-offs between cost and accuracy objectives compared to existing methods. Extensive experiments in a Kubernetes implementation with five real-world inference pipelines demonstrate that IPA improves end-to-end accuracy by up to 21% with a minimal cost increase. 
The code and data for replication are available at https://github.com/reconfigurable-ml-pipeline/ipa.","language":"en","license":{"name":"Creative Commons Attribution-NonCommercial 4.0","short_name":"CC BY-NC 4.0","text":"Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.\n\nNonCommercial — You may not use the material for commercial purposes.\n\nNo additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.","url":"https://creativecommons.org/licenses/by-nc/4.0"},"keywords":[{"word":"inference pipelines"}],"section":"Articles","is_remote":true,"remote_url":"https://escholarship.org/uc/item/2p0805dq","frozenauthors":[{"first_name":"Saeid","middle_name":"","last_name":"Ghafouri","name_suffix":"","institution":"University of South Carolina & Queen Mary University of London","department":""},{"first_name":"Kamran","middle_name":"","last_name":"Razavi","name_suffix":"","institution":"Technical University of Darmstadt","department":""},{"first_name":"Mehran","middle_name":"","last_name":"Salmani","name_suffix":"","institution":"Technical University of Ilmenau","department":""},{"first_name":"Alireza","middle_name":"","last_name":"Sanaee","name_suffix":"","institution":"Queen Mary University of London","department":""},{"first_name":"Tania","middle_name":"Lorido","last_name":"Botran","name_suffix":"","institution":"Roblox","department":""},{"first_name":"Lin","middle_name":"","last_name":"Wang","name_suffix":"","institution":"Paderborn University","department":""},{"first_name":"Joseph","middle_name":"","last_name":"Doyle","name_suffix":"","institution":"Queen Mary University of London","department":""},{"first_name":"Pooyan","middle_name":"","last_name":"Jamshidi","name_suffix":"","institution":"University of South Carolina","department":""}],"date_submitted":"2024-04-25T20:36:55Z","date_accepted":"2024-04-25T20:36:55Z","date_published":"2024-01-01T00:00:00Z","render_galley":null,"galleys":[{"label":"","type":"pdf","path":"https://journalpub.escholarship.org/jsys/article/57201/galley/43398/download/"}]}