irq-1 1 day ago
> “We’re heading into a new age of AI-assisted coding, and right now, it’s difficult to predict how that will play out. But if I had to place a bet, I would say that in the long run, AIs are more likely to generate high-quality code in a language like Gleam. Gleam makes it quick and easy for AIs to check their code, get instant feedback, and iterate. That should be an advantage compared to languages that are slow to build, have cryptic error messages, and can’t catch mistakes at build-time.”

Interesting point and one I haven't seen before. Almost like arguing that AI will work best with things it can learn quickly, rather than things that have lots of examples.

cardanome 1 day ago
I feel like now that LLMs are getting better, the quality of the examples matters more than the quantity.

Garbage in, garbage out. If you confuse it with a lot of junior-level code and a language that constantly changes best practices, the output might not be great.

On the other hand, if you have a language that was carefully designed from the start and avoids making breaking changes, if it has great first-party documentation and a unified code style everyone adheres to, the LLM will have an easier time.

The latter also happens to be better for humans. Honestly, the best bet is to make a good language for humans. Generative AI is still evolving rapidly, so there's no point in designing the lang for current weaknesses.

mikepurvis 1 day ago
If the main win of starting over with a new language is that you don't have a giant glut of legacy example code and documentation targeting no-longer-the-best-practice, maybe there's a solution where you take an established modern language like rust or go and feed the LLM a more curated set of material from which to learn.

Like instead of "the entire internet", here's a few hundred best-practice projects, some known up-to-date documentation/tutorials, and a whitelist of 3rd party modules that you're allowed to consider using.

pahbloo 8 hours ago
I believe the best scenario is a language that gives an AI the best environment to train itself in a manner similar to the way a game like Go gave AlphaGo the opportunity to play innumerable times against itself and study the results.

I think the best programming languages of the future will come with their own LLMs, synthetically trained before release.

lanthissa 1 day ago
It feels like it should be true that a referentially transparent, type-safe language would be the 'right' language for AI coding: since each code block is stateless, you should be able to decompose problems in parallel and test them infinitely far down.
hinkley 1 day ago
I can’t say where AI will end up but I firmly believe it will pick winners and losers in the next generations of programming languages. Not always for the better.

Any language that is difficult for an AI to understand will have to get popular by needing far less boilerplate code for AIs to write in the first place. We may finally start designing better APIs. Or lean into it and make much worse ones that necessitate AI. Look especially to an AI company to create a free razor and sell you the blades.

WJW 1 day ago
If you have good enough (LSP+MCP) tooling, I'd expect that "the LLM can learn quickly" vs "the LLM has lots of examples" would converge towards being the same thing. At the very least it could generate many potential examples, put them all through the tooling to deterministically get many "true" examples, and then learn from that.
btschaegg 1 day ago
> […] rather than things that have lots of examples.

Well, one glaring issue with the assumption that the quality of LLM output mostly depends on a large volume of examples online would be Sturgeon's law.

stephenlf 1 day ago
Beautiful. I’ve taken a few cracks at learning Gleam, but I found I quickly get stuck in abstraction hell—building types on types in types without coding any behavior. I would probably have more success learning Erlang first, just to get a handle on those functional patterns the BEAM was built for. I should take another crack at it.
throwawaymaths 1 day ago
Just FYI: unlike in many pure FPs, building types on types is generally not a pattern that you use in Erlang (or Elixir), and it's largely considered an anti-pattern in both communities.

You might not get the "handle" you're looking for?

GavinMcG 1 day ago
For what it’s worth, I don’t think there’s much about Gleam’s design that is specific to “the functional patterns the BEAM was built for.” If you’re getting stuck in abstraction hell, consider asking the community for advice on what would be more idiomatic.
giacomocava 1 day ago
Amazing to hear success stories of Gleam in production! Running on the BEAM really feels like a superpower
0cf8612b2e1e 1 day ago
For Gleam/Erlang, is there an easy way to package up an executable you can distribute without also shipping Erlang?
totalperspectiv 1 day ago
I can't speak to Gleam, but for Elixir I just used Burrito to create a single executable: https://github.com/burrito-elixir/burrito I think it works for just Erlang too.
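For reference, the shape of a Burrito build looks roughly like this. This is a sketch from memory, not an exact recipe: the dependency version, app name, and target options are assumptions, so check the Burrito README for the current details.

```shell
# Sketch of a Burrito build. Assumes mix.exs already declares the dependency
# ({:burrito, "~> 1.0"} -- version is a guess) and configures the release:
#
#   releases: [
#     my_app: [
#       steps: [:assemble, &Burrito.wrap/1],
#       burrito: [
#         targets: [linux: [os: :linux, cpu: :x86_64]]
#       ]
#     ]
#   ]
#
# With that in place, a normal prod release wraps the ERTS and your app
# into a single self-contained binary:
MIX_ENV=prod mix release

# The wrapped executable lands under burrito_out/, named per target
./burrito_out/my_app_linux
```

The nice part is that the release step is unchanged; Burrito hooks in as an extra release step rather than a separate build system.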
toast0 1 day ago
I haven't used it, but from the docs, I don't see why this wouldn't work for any language that compiles to beam files. You might need to adjust the build setup a bit.

Personally, I think I'd prefer something that worked without unpacking, but I don't actually need something like this, so my preferences aren't super important :D

jdiff 1 day ago
Yes, I've created single-file Gleam executables by compiling to JavaScript and then using Node's experimental SEA (single executable application) feature. As a bonus, typically I've found the JavaScript targets to run a good deal faster for number-crunching tasks.
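The rough shape of that process, as a sketch: the entry-point path and the choice of esbuild as bundler are assumptions about a typical project layout, and the SEA feature is still experimental, so treat this as illustrative rather than exact.

```shell
# 1. Compile the Gleam project to JavaScript and bundle it for Node
gleam build --target javascript
npx esbuild build/dev/javascript/my_app/main.mjs \
  --bundle --platform=node --outfile=app.js

# 2. Describe the single-executable blob and generate it
echo '{ "main": "app.js", "output": "sea-prep.blob" }' > sea-config.json
node --experimental-sea-config sea-config.json

# 3. Copy the Node binary and inject the blob into it
#    (the sentinel fuse string comes from Node's SEA documentation)
cp "$(command -v node)" my_app
npx postject my_app NODE_SEA_BLOB sea-prep.blob \
  --sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2
```

This is also why the result is hefty: the final `my_app` is a full copy of the Node binary with the bundled JS appended inside it.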
0cf8612b2e1e 1 day ago
How big is a hello world executable in that case?
jdiff 1 day ago
Hefty. The process is effectively just injecting all of the JS into the Node interpreter executable, so it's the size of the interpreter plus whatever you stuff inside. It's close to 50MB.
0cf8612b2e1e 1 day ago
Oof, well that’s not ideal.
lpil 1 day ago
No, the VM needs to be installed on the machine, similar to C#, Java, Python, etc.

There have been some projects for creating self-extracting executable archives for the VM, and some projects for compiling BEAM programs to native code, but nothing has become well established yet.

vips7L 1 day ago
I think C# has been able to compile the vm + your DLL into a single binary that doesn’t extract for a while now. There’s currently ongoing work for Java to do this.
lpil 16 hours ago
Yes, for sure. Both the JVM and the CLR have some native binary compilation options that, while less popular, are certainly suitable for production. The similar project for the BEAM unfortunately stalled and is no longer being developed.
WJW 1 day ago
You can compile to JavaScript as well.
bmitc 22 hours ago
What is the current status quo of using processes in Gleam? When I've looked, the language tour doesn't even get to anything about processes, messaging, or OTP. I found it here https://hexdocs.pm/gleam_otp/0.1.1/index.html by searching Google, but it seems like it's almost an afterthought.

I'm curious if I've had the wrong impression of Gleam. My assumption was that it was bringing static types to the BEAM's processes and OTP, but it seems like it's mainly a statically typed language that just happens to be on the BEAM and that it isn't necessarily looking to solve the "static type the messages" in Erlang and Elixir. Is that correct?

I'm not saying either way is bad or good. I'm just trying to get a sense of the language's origins and where it's going compared to Elixir and its gradual typing story. For example, if I know and like F#, Elixir, and Rust, what is the selling point of Gleam?

Ndymium 17 hours ago
Note that you linked to the 0.1.1 version of the gleam_otp documentation. The latest version resides at https://hexdocs.pm/gleam_otp/index.html and both gleam_erlang and gleam_otp have hit 1.0.0 already. It doesn't contain every feature yet (like dynamic supervisors) but it's usable (and I've rolled my own dynamic supervisor in the meantime).
lpil 16 hours ago
It's all production ready and in use by businesses and open source projects today!

The language tour covers _the language_ rather than the concurrency framework, so you'd look to the Erlang and Gleam OTP documentation to learn about the framework. Erlang and Elixir have a similar documentation split, for example, the most popular book teaching Erlang, LYSE, has the first half about the language and the second half about the framework.

> it seems like it's mainly a statically typed language that just happens to be on the BEAM and that it isn't necessarily looking to solve the "static type the messages" in Erlang and Elixir. Is that correct?

Not quite! Gleam does have a type-safe derivative of the OTP framework, while maintaining full compatibility with the original untyped OTP. When writing a Gleam application that runs on the Erlang VM you will be using typed actors and messages.

> I'm just trying to get a sense of the language's origins and where it's going compared to Elixir and its gradual typing story.

Elixir and Gleam's respective type systems are about as different as you can get while still being type systems. The difference is so large that they offer completely distinct advantages and development experiences, so people who especially value and enjoy one are unlikely to find the other to their liking. I believe this is a real strength! It means that the BEAM ecosystem offers a wider range of programming styles, and so it can attract and serve a wider range of developers, growing the ecosystem as a whole. BEAM languages work together closely.

> if I know and like F#, Elixir, and Rust, what is the selling point of Gleam?

I could give you a list of technical benefits to picking Gleam, but really I think this sort of technology choice is very personal. You may find Gleam to be a language that you enjoy and are productive with, and if that's the case then it's a great tool for you! Your use of Rust and F# suggests you'll appreciate the programming style, and your use of Elixir suggests you'll appreciate using the same BEAM runtime, so perhaps give it a try.

okkdev 18 hours ago
The reason it's not in the language tour is that it's not part of the language itself. There's no async-specific syntax or feature; it all depends on the target, since Gleam can compile to Erlang or JavaScript. If you compile to Erlang you can use gleam_erlang and gleam_otp to leverage OTP. If you compile to JavaScript you use gleam_javascript and have to work with promises. It's definitely not just an afterthought, and gleam_otp recently had a big 1.0 update.
bmitc 13 hours ago
Processes are part of the languages Elixir and Erlang.
okkdev 6 hours ago
They are primarily part of the BEAM which all these languages leverage. :)