Talks
2025
- [2025.11] Merlin's Whisper: Enabling Efficient Reasoning in LLMs via Black-box Adversarial Prompting at the Research Student Seminar @PolyU. [slides]
- [2025.11] TokenSkip: Controllable Chain-of-Thought Compression in LLMs at the NLP Group @KCL. [slides] [video]
- [2025.06] Sharing Panel: Efficient Reasoning in Large Language Models at NICE and MLNLP. [video]
- [2025.05] Stop Overthinking: Towards Efficient Reasoning in Large Language Models at the Theory Lab, Huawei Hong Kong Research Center. [slides]
- [2025.01] Speculative Decoding for Efficient LLM Inference at COLING 2025. [homepage] [slides] [video]
2024
- [2024.03] Unlocking the Efficiency of LLM Inference: A Comprehensive Survey of Speculative Decoding at NICE and the CIP Group @CASIA. [slides] [video]