Making Long-Context LLMs Usable with Context Caching