Abstract: The distributed deployment of Large Language Models (LLMs) on edge servers close to users has unlocked service providers' potential to deliver low-latency inference. To obtain more ...