GPT-3.5 and GPT-4 response times
Some of the LLM apps we've been experimenting with have been extremely slow, so we asked ourselves: what do GPT APIs' response times depend on?
Software 1.0 -- the non-AI, non-ML sort -- extensively uses testing to validate that things work. These tests are basically hand-written rules and assertions. For example, a regular
GPT-4 supports images as an optional input, according to OpenAI's press release. As far as I can tell, only one company has access. Which makes you
Context: first thoughts on Pause Giant AI experiments. I will refine my thinking over time. * I had not thought about AI safety much since ~2017, after thinking a
Chain-of-thought reasoning is surprisingly powerful when combined with tools. It feels like a natural programming pattern for LLMs: thinking by writing. And it's easy to see
There is a very simple, standardized way of solving the problem of too-small GPT context windows. This is what to do when the context window gets full:
I was building an agent with langchain today, and was very frustrated with the developer experience. It felt hard to use, hard to get things right, and overall
Microsoft recently held an event where they announced the "Office 365 Copilot". It was extremely impressive to me, to the extent that (when this launches) I
"What's the Moat of your AI company?" That seems to be top of mind for founders pitching their novel idea to VCs -- and
GPT-4 came out yesterday and overshadowed other announcements, each of which would have been bombshell news otherwise: * Anthropic AI announcing their ChatGPT-like API -- likely the strongest competitor to