Where can I get a GDPR-compliant API as an alternative to OpenAI?
It may well be somewhat more expensive than using the OpenAI API, but it shouldn't be 50 times the price.
If it is to be GDPR-compliant and you really want to be sure that no data can flow to third parties, the most sensible approach is to host things on your own server. However, whether operating or renting your own server is worthwhile at all also depends on the intended use.
There are also providers such as Aleph Alpha or Spherex. It would be useful to know what you want to use such an API for: an existing service, a product of your own, or something completely different? Last but not least, HuggingFace also comes to mind, but I'm not absolutely sure about it.
“It would be useful to know what you want to use such an API for.”
I'd like to have draft replies to my emails pre-formulated. With the OpenAI API key, this works well in most cases. However, I don't think this complies with the rules of the GDPR.
Another use case is spell-checking at the touch of a button.
For this, I have adapted this script to my needs: https://github.com/ecornell/ai-tools-ahk
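Roughly speaking, the script boils down to a call like the following. A minimal Python sketch with the official openai package; the model name and prompts are placeholders of mine, not the exact ones from the linked script:

```python
# Minimal sketch of drafting an email reply via the OpenAI API.
# Requires the official `openai` package and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

def draft_reply(email_text: str) -> str:
    """Ask the model for a draft reply to an incoming email."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Draft a polite, concise reply to the user's email."},
            {"role": "user", "content": email_text},
        ],
    )
    return response.choices[0].message.content

print(draft_reply("Hi, could you send me the invoice for October?"))
```

Every such call sends the full email text to OpenAI's servers, which is exactly my GDPR concern.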
Thanks in advance for your answers. Does that context help?
The context definitely helps, and in your case other solutions come into question. I don't know what your hardware and computing power look like, but I would look more towards local LLMs.
I run Ollama with OpenWebUI via Docker and can do a lot with stored prompts. If you want to go further, much more is possible, for example with n8n for AI automation.
The good thing about all the technologies mentioned is that your data is not passed on to third parties. Everything runs locally on your computer, on a home server, or alternatively on a separate server that you rent externally.
Even if you only use Ollama on its own, other tools can access it out of the box thanks to its API. For example, I have integrated it into Obsidian and can quickly, easily and comfortably search all my notes.
Using shortcuts, I can create a summary, fill an email with content or work with other data. Even as a code assistant, everything runs locally without leaving my network.
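If you're wondering what that API access looks like in practice, here is a minimal sketch against Ollama's local REST endpoint (the model name is just an example, and it assumes the model has already been pulled; nothing leaves localhost):

```python
# Minimal sketch: send a prompt to a locally running Ollama instance.
# Assumes Ollama listens on its default port 11434 and the model was
# pulled beforehand, e.g. with `ollama pull llama3.1`.
import requests

def summarize(text: str, model: str = "llama3.1") -> str:
    """Ask the local model for a summary of a note."""
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": f"Summarize the following note:\n\n{text}",
            "stream": False,  # one JSON object instead of a token stream
        },
        timeout=300,
    )
    r.raise_for_status()
    return r.json()["response"]

print(summarize("Meeting notes: discussed GDPR-compliant LLM hosting ..."))
```

Integrations like the Obsidian one essentially do nothing more than this behind the scenes.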
I can imagine that a separate server opens a new Pandora's box in terms of legal requirements. Or am I totally wrong?
No problem, happy to help.
Who doesn't know it, this constant whining. ;) I wish you a lot of success in making this palatable to your employer. They might even save money if the subscription costs go away.
In any case, good luck. Feel free to give feedback later on whether and how it was solved. For further questions, just check in again.
Thank you very much for your solutions. Let's see how I bring this up. At our company, nothing is allowed to cost anything. I'll look into it; maybe my next employer will offer me 16 GB. ;)
There are smaller LLMs such as Phi, which are relatively compact and optimized so that they can also be used on weaker hardware. However, I would plan for at least 16 GB of RAM so that everything runs smoothly.
An option for you might be buying a somewhat newer thin client or the like. Alternatively, at least extend the RAM to 16 GB or more and then test for yourself which LLMs run acceptably on it.
Otherwise, rent a (v)server for ~€50 a month and run the LLMs with significantly more performance. Via OpenWebUI you can also work with profiles, as you know them from other web services.
The data still remains with you. The essential difference is that it does not run locally but on your server. Everything else stays the same, except that it runs on an external server.
Thank you. We have quite old computers at work. Do you know of local LLMs for which that would not be a problem? (It's an old i5, 8 GB of RAM, no graphics card.)
As far as I know, OpenAI at least says that they do not use data from paying API users for training. But I wouldn't put my hand in the fire for that, even if they say so.
Thanks to EU regulations such as the AI Act, there are virtually no European competitors to OpenAI, and there won't be any in the foreseeable future. If you process personal data such as emails, there is realistically no way around running an LLM on your own infrastructure.
However, for processing and answering emails, Phi from Microsoft or comparably small LLMs (~2.7 billion parameters) could be too small. I don't think anything sensible would come out of it, especially if you take GPT-4o or GPT-4 as the comparison. If the output quality of GPT-4o mini is enough for you, Llama 3.1 70b from Meta would be very similar in quality, but open source and thus runnable on your own hardware (e.g. with Ollama). No old office PC will do for that, though. Meta recommends at least an 8-core CPU, 32 GB of RAM and a GPU at the level of at least the Nvidia 3000 series. Somewhat smaller systems also work, but you will then wait quite a long time for the answers.
There are some vendors who host the big open-source models for you and whom you can access via API with usage-based billing (just like OpenAI), for example Replicate. Then you don't have to get your own hardware. Unfortunately, I can't think of a European provider.
Thank you for your answer. GPT-4o mini is perfect for my purposes. Is Phi really so much worse?
Phi 3.5 mini achieves a score of 55.4% (out of a maximum of 100%) on the MMLU benchmark (a fairly broad benchmark that measures an LLM's world knowledge and language understanding). GPT-4o mini scores 82%. That's quite an extreme difference. The quality of a language model correlates quite strongly with the number of parameters: the more parameters, the better, but also the higher the system requirements. Phi 3.5 mini has 3.8 billion parameters (I assumed 2.7 billion in my original answer, so it's actually a bit more); for GPT-4o mini the number is not public, but it is definitely far higher. Llama 3.1 70b (70 billion parameters) achieves a score of 86% and is thus objectively even better than GPT-4o mini.
The measurement methods differ here and there (few-shot / multi-shot / CoT), so take everything with a pinch of salt. But in principle, this gives a good overview of how these models compare to each other.
A possible compromise would still be Llama 3.1 8b (8 billion parameters), which achieves a score of 73%. However, as I said, at least a 3000-series GPU should be installed for that.
In principle, however, you have to try out what makes sense for your application. It may also be that a small language model like Llama 3.1 8b, or an even smaller one like Phi mini, works well for your use case. Thanks to Ollama, this can be tested locally relatively easily, as long as your own machine has enough juice.
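A rough sketch of how such a comparison could look over Ollama's REST API; the model tags are examples and each has to be pulled first (`ollama pull <tag>`):

```python
# Sketch: run the same task against several local models via Ollama
# and eyeball quality and speed. Model tags are illustrative examples.
import requests

PROMPT = "Reply politely to this email: 'Can we move our call to Friday?'"

for model in ["phi3.5", "llama3.2:3b", "llama3.1:8b"]:
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": PROMPT, "stream": False},
        timeout=600,
    )
    r.raise_for_status()
    data = r.json()
    # total_duration in the response is reported in nanoseconds
    print(f"--- {model} ({data['total_duration'] / 1e9:.1f}s) ---")
    print(data["response"][:300])
```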
You've forgotten Llama 3.2 with 1b and 3b here, which also perform relatively well in the benchmarks and run reasonably well on weaker hardware. 16 GB of RAM without a dedicated GPU can be enough there.
Alternatively, Llama 3.1 8b instead of the 70b; you'd have to check on HuggingFace how its numbers look. And instead of 32 GB of RAM and a dedicated GPU, a Mac Mini M1 or newer would be an option.
That said, I have already run the mentioned models on a thin client as a test. I don't want to contradict you, and ideally more performance would be good. But it also works on a smaller scale, even without 8 cores, a GPU, etc.
A dedicated server with a 14-core CPU and 64 GB of RAM also costs “only” around €50 per month. As long as a dozen employees aren't hitting it at once and it's enough for you, you could make do with that and host everything yourselves.
Best regards, medmonk
I honestly don't believe that 3b or even 1b models (which are actually intended mainly for edge/on-device applications) are sufficient for the use case he described (formulating email replies). Especially when the model is also supposed to emit its output in a structured format given a correspondingly long input.
I tested this with Llama 3.1 8b Instruct q8_0 and an email of average length. Despite being instructed otherwise, it returned the reply to the email only as plain text, starting with “Certainly! […] Here's an appropriate answer to the provided e-mail:”. Whether that matters depends on how the answer is processed further downstream. Llama 3.1 70b Instruct had no problem with it.
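A minimal sketch of this kind of check; the prompt and model tag here are illustrative, not the exact ones from my test:

```python
# Sketch: ask a local model for a strictly structured (JSON) reply and
# check whether it actually complies, or prepends chatter instead.
import json
import requests

EMAIL = "Hi, the invoice you sent has the wrong address. Can you fix it?"
PROMPT = (
    "Reply to the following email. Answer ONLY with a JSON object of the "
    'form {"subject": ..., "body": ...} and nothing else.\n\n' + EMAIL
)

def structured_reply_ok(model: str) -> bool:
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": PROMPT, "stream": False},
        timeout=600,
    )
    r.raise_for_status()
    try:
        reply = json.loads(r.json()["response"])
    except json.JSONDecodeError:
        return False  # model added chatter like "Certainly! ..."
    return isinstance(reply, dict) and {"subject", "body"} <= set(reply)

print(structured_reply_ok("llama3.1:8b"))
```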
Perhaps this would be possible with fine-tuning; then these very small models would be worth considering.
But the Apple Silicon devices are definitely a good recommendation; we use those machines instead of dedicated GPUs.
A big thank you to both of you! I mainly need the AI to automatically respond to emails and to do a spelling correction via a shortcut. I can imagine that one of the less powerful models could be enough for that. Unfortunately, we only have junk computers at work, so not much is possible there.