Why was external AI chosen over an internal system?

On the use of external resources: yes, you can run your LLM locally if you like.

But have you done this for a project?

It requires you to own or lease particularly impressive hardware!

Try the smaller language models (the ones you might realistically consider self-hosting) yourself and see how impressed you are.
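If you want to kick the tyres quickly, something like Ollama makes this easy. Here's a minimal sketch, assuming Ollama is installed and running locally and you've pulled a small model (the model name and prompt are just placeholders):

```python
# A minimal sketch of trying out a small local model via Ollama's HTTP API.
# Assumes Ollama (https://ollama.com) is running locally and you've done
# e.g. `ollama pull llama3` beforehand. Model name and prompt are illustrative.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

payload = json.dumps({
    "model": "llama3",  # any small model you have pulled locally
    "prompt": "Summarise the plot of Hamlet in two sentences.",
    "stream": False,    # return one complete response instead of streamed chunks
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Run a few of your real prompts through a 7B-class model this way and compare the answers with what a hosted frontier model gives you; the gap is usually obvious.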

Your mileage may vary, but IMHO you'd need to be looking at hosting at least a 70B-parameter model, which is going to be quite costly to self-host (at 16-bit precision that's roughly 140 GB of weights alone, i.e. multi-GPU territory).

For reference, GPT-3.5 is reported to be a 175B-parameter model, and GPT-4 reportedly has nearly 2 trillion parameters :sweat_smile:

I wrote this plugin:

And it has an AI tagging feature. In my experience you need GPT-4 Turbo to make it work well (and then it really works well!)
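To give a flavour, here's a hedged sketch of what an AI tagging call can look like against the OpenAI chat completions API. This is not the plugin's actual code; the function name, prompt, and tag list are all illustrative:

```python
# Hypothetical sketch of an AI tagging call -- NOT the plugin's actual code.
# Assumes an OpenAI API key in the OPENAI_API_KEY environment variable.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def suggest_tags(post_text: str, allowed_tags: list[str]) -> list[str]:
    """Ask the model to pick suitable tags for a post from an allowed list."""
    payload = json.dumps({
        "model": "gpt-4-turbo",  # smaller models tag noticeably worse, in my experience
        "messages": [
            {"role": "system",
             "content": "You tag forum posts. Reply with a JSON array of tags "
                        f"chosen only from this list: {allowed_tags}"},
            {"role": "user", "content": post_text},
        ],
    }).encode("utf-8")
    req = urllib.request.Request(API_URL, data=payload, headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    })
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())["choices"][0]["message"]["content"]
    return json.loads(reply)  # trusts the model to return valid JSON; real code should validate

print(suggest_tags("How do I back up my vault to S3?", ["backup", "sync", "plugins"]))
```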

If you intended to self-host something as powerful as those, you'd need very deep pockets.

This is why an external LLM API is still an attractive, pay-as-you-go option: you pay only for the calls you make, not for expensive infrastructure that sits idle between requests.

Of course, if privacy is a sufficiently major concern, that might change the maths.
