rafram 22 hours ago [-]
You may get a letter from Apple’s lawyers because of the name - Swift and SwiftUI are trademarks, and this seems like something they’d want to keep for themselves.
leobg 13 hours ago [-]
Or from Taylor’s.
andsoitis 20 hours ago [-]
> Swift and SwiftUI are trademarks

This is called SwiftAI, though.

rafram 17 hours ago [-]
Right, and if it were called IBM AI, they’d get a letter from IBM instead. You can’t just tack something onto the end of a trademarked brand name.
reactordev 20 hours ago [-]
If they can prove enough similarity or overlap with their brand, they’ll find a way. And since it targets macOS/iOS specifically, there you go.
steve1977 16 hours ago [-]
Also considering that Apple probably has "Apple Intelligence" trademarked.
deanputney 24 hours ago [-]
Awesome, this is a good idea! Having a nice wrapper to make LLM calls easier is very helpful too :)

Nice to see someone digging in on the system models. That's on my list to play with, but I haven't seen much new info on them or how they perform yet.

mi12-root 23 hours ago [-]
We’ve begun internally evaluating the model and will share our findings in more detail later. So far, we’ve found that it performs well on tasks such as summarization, writing, and data extraction, and shows particular strength in areas like history and marketing. However, it struggles with STEM topics (e.g., math and physics), often fails to follow long or complex instructions, and sometimes avoids answering certain queries. If you want us to evaluate a certain use case or vertical, please share it with us!
jc4p 23 hours ago [-]
I do a lot of AI work, and right now the story for running LLMs on iOS is very painful (though running Whisper etc. is pretty nice), so this is exciting. The API looks Swift-native and great — I can't wait to use it!

Question/feature request: Is it possible to bring my own CoreML models over and use them? I honestly end up bundling llama.cpp and doing gguf right now because I can't figure out the setup for using CoreML models, would love for all of that to be abstracted away for me :)

mi12-root 23 hours ago [-]
That’s a good suggestion, and it indeed sounds like something we’d want to support. Could you help us better understand your use case? For example, where do you usually get the models (e.g., Hugging Face)? Do you fine-tune them? Do you mostly care about LLMs (since you only mentioned llama.cpp)?
jc4p 21 hours ago [-]
Thank you! I’ve been fine-tuning tiny Llama and Gemma models using transformers and then exporting from the safetensors it spits out. My main use case is LLMs, but I’ve also tried getting fine-tuned YOLO and other PyTorch models running and hit similar problems — it just seemed very confusing to figure out how to properly use the phone for this.
mi12-root 6 hours ago [-]
Thanks for sharing the details — that makes a lot of sense. Fine-tuning models and exporting them for on-device use can be tedious these days. We’re planning to look into supporting popular on-device LLMs more directly, so deployment feels much easier. We'll let you know here or reach out to you once we have something.
hbcondo714 20 hours ago [-]
Mind if I ask what your thoughts are on the overall quality of Apple's on-device LLMs? I've found that LanguageModelSession always returns very lengthy responses:

https://developer.apple.com/forums/thread/789182?answerId=85...

mi12-root 6 hours ago [-]
I tested the system LLM with a long article using two prompts: one asking for a summary in at most 20 words, and another asking for a one-sentence summary. In both cases, the model followed the instructions correctly. Regarding your second point in the link above: maximumResponseTokens: 500 corresponds to roughly 1,500–2,000 characters in English, since for the AFM tokenizer a token typically represents 3–4 characters. Could that be why you are getting large outputs? If you share your prompt(s), we’d be happy to take a closer look. You can reach us on Slack, Discord, or privately at root@mi12.dev
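As a rough back-of-the-envelope (using the ~3–4 characters-per-token figure above; `estimatedCharacterRange` is just an illustrative helper, not part of any API):

```swift
// Rough estimate: with ~3-4 English characters per token on the AFM
// tokenizer, a cap of maximumResponseTokens: 500 still allows about
// 1,500-2,000 characters of output.
func estimatedCharacterRange(forTokens tokens: Int) -> ClosedRange<Int> {
    (tokens * 3)...(tokens * 4)
}

print(estimatedCharacterRange(forTokens: 500)) // 1500...2000
```

So a 500-token cap is nowhere near "short" — to get a genuinely terse answer, constrain the output in the prompt/instructions as well, not just via the token limit.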
keyle 23 hours ago [-]
Needs more examples on custom.
mi12-root 23 hours ago [-]
Thanks for the feedback! When you say “custom,” do you mean additional integrations with LLM providers, or more documentation on how to build your own custom integration? If you mean the former, we’re currently focused on stabilizing the API and reaching feature parity with FoundationModels (e.g., adding streaming). After that, we plan to add more integrations, such as Claude, Gemini, and on-device LLMs from Hugging Face.
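If you mean the latter, a custom integration would roughly take the shape of a type conforming to an LLM protocol. Purely a hypothetical sketch — the protocol name and signature below are assumptions, not the current API:

```swift
// Hypothetical sketch: SwiftAI's custom-integration surface isn't
// documented yet, so this protocol name and method signature are
// assumptions, not the real API.
protocol CustomLLM {
    func complete(_ prompt: String) async throws -> String
}

// A trivial conforming backend - a stand-in you could swap for a
// real provider (a local llama.cpp model, an Ollama endpoint, or a
// hosted API) behind the same method.
struct EchoLLM: CustomLLM {
    func complete(_ prompt: String) async throws -> String {
        "echo: \(prompt)"
    }
}
```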
jdmg94 8 hours ago [-]
There are no examples or documentation on `CustomLLM`. The README file has examples for `SystemLLM` and `OpenaiLLM`, but there's no way for us to know if we need to bring in gguf files, Ollama, Hugging Face, etc.
lawgimenez 21 hours ago [-]
Another vibe-coded project.
mi12-root 6 hours ago [-]
While we did use AI coding tools, we put significant thought into the design of SwiftAI and would greatly value feedback on it (see the system design doc above). All AI-generated code was carefully reviewed and often rewritten (no YOLO).