Ease of type creation = quality of language

Consider the following:
>The compiler should catch logic errors
>The compiler can only detect type errors
>Logic errors should become type errors
>A good programming language should make it easy to represent logical assertions using types and very hard to circumvent the type system
So basically, if you want code to be maintainable, your programming language should meet all of the following criteria:
>Type system is expressive (supports constraints via interfaces, typeclasses, etc)
>Type system is hard to circumvent (typing must be strong and static)
>Types are easy to create (records and typed unions must be available, GADTs are optional)
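
A minimal sketch of what I mean, in Haskell (all the names here are made up, purely for illustration): a record, a typed union, and a typeclass constraint together turn "forgot to handle the missing-user case" from a logic error into a type error.

newtype UserId = UserId Int deriving (Eq, Show)

-- a record
data User = User { userId :: UserId, userName :: String } deriving Show

-- a typed union: the NotFound case can't be silently ignored,
-- pattern matches on it are exhaustiveness-checked by the compiler
-- (with warnings enabled)
data LookupResult = Found User | NotFound UserId deriving Show

-- a constraint via a typeclass: only renderable things can be sent back
class Render a where
  render :: a -> String

instance Render LookupResult where
  render (Found u)             = "hello, " ++ userName u
  render (NotFound (UserId n)) = "no user with id " ++ show n

respond :: Render a => a -> String
respond = render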

Attached: coffee-cup-1748015276.jpg (1024x768, 84.27K)

Some of what you say sounds interesting, but it's too abstract and general to know for sure what you mean.
Maybe add some examples.

I should be able to speak into a microphone and tell my compiler what I want it to do and it just does it

Computer! Build me a cloud for my website that hosts GAAS apps and run it on the mac mini

simple fucking shit why is it taking so long to invent this

The interesting parts of programs are the runtime state; you can only make very boring toy programs that execute entirely at compile time.

You're describing dependent typing.
Those languages are essentially just proof checkers that use Curry-Howard to map type correctness onto logical correctness.
You don't actually want to use them for everyday stuff; most aren't even Turing complete because of termination checks.
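
You can get a small taste of the Curry-Howard side without going full dependent types; here's a sketch in Haskell with a couple of extensions (not Idris, and the names are mine):

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- type-level naturals: the length of the vector lives in its type
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- "head of an empty list" is now a type error instead of a runtime crash,
-- because the argument type demands at least one element
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x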

>simple fucking shit why is it taking so long to invent this
Honestly? It's the reptiles.
It has existed forever but the secret societies don't want this to get into the public, because it would take away from their power to control the masses.
They kind of hinted at this in the most recent C++17 standard, if you read it closely.

there's more to programming than defining types.
SML for example has a type system that makes it very convenient to create types, yet it still often sucks to write algorithms on those types.
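
Rough illustration of that second part (in Haskell rather than SML, and a toy example of my own): declaring the type is one line, but every pass over it has to spell out all the structural plumbing by hand.

-- declaring the type is cheap...
data Expr = Lit Int | Neg Expr | Add Expr Expr | Mul Expr Expr

-- ...but every traversal repeats the same case analysis
constFold :: Expr -> Expr
constFold (Lit n)   = Lit n
constFold (Neg e)   = case constFold e of
                        Lit n -> Lit (negate n)
                        e'    -> Neg e'
constFold (Add a b) = case (constFold a, constFold b) of
                        (Lit x, Lit y) -> Lit (x + y)
                        (a', b')       -> Add a' b'
constFold (Mul a b) = case (constFold a, constFold b) of
                        (Lit x, Lit y) -> Lit (x * y)
                        (a', b')       -> Mul a' b'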

Kotlin

So Haskell then, this language is perfect.

Yes yes yes
Classic FP brainwashing
It started like this
>logic is good bro
>logic and types are the same bro
>just use types bro
Well fuck this
Types are only really a good fit for FP anyway
I like them mind you, but I don't know. It hurts my soul to forget everything else and use this for everything
If types and logic are the same, then why the FUCK don't we use logic?

I'm thinking every day about how to incorporate logic into an imperative or functional language. I don't know how it should be done, but I feel there is something good to be done there.
One thing I know for sure is that, whether imperative or functional, the optimizing compiler is chock-full of constraint satisfaction, i.e. logic.

>most aren't even turing complete because of termination checks
None of them are Turing complete but usually you can describe infinite computations in some way and then extract a program in a Turing complete language that does the thing you want.

idris-lang.org/
youtube.com/watch?v=X36ye-1x_HQ demo

was cool that the computer wrote code for him

Attached: firefox_HvXJCZ4D1t.png (614x541, 260.03K)

>If types and logic are the same, and then why the FUCK don't we use logic?
Using types means you're using logic, that's what saying they're the same means.
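
Curry-Howard in one line, as a sketch: writing a total function with this type in Haskell is the same thing as proving the proposition "A and (A implies B) together imply B".

modusPonens :: a -> (a -> b) -> b
modusPonens x f = f x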

I know that. That's not what I meant.
Using types is a restricted and/or constrained form of logic.
Compared to Prolog, for example.
For one thing, Prolog predicates can be written so that they work both ways, mapping input values to outputs or outputs to inputs.
Also, if logic is so good, why not use it as a model of computation, instead of reducing it to a type checker and pattern matching?
As I said, I don't know exactly what should be done, but logic has a little more horsepower than a simple type system.

Absolutely correct.

Attached: thinkpad-haskell-chad.png (758x644, 255.66K)

>Type system is expressive
This is the most important part.
Otherwise you end up with shit like this github.com/ghc/ghc/blob/master/libraries/ghc-prim/GHC/Tuple.hs#L90

It's okay to say that Haskell tuples are shit because they are but saying that Haskell doesn't have an expressive type system is just disingenuous.

>types are for FP
Rust has a great type system and isn’t an FP lang
Scheme is an FP lang and has an abhorrent type system (weak + dynamic typing)

When I say types, I mean a Hindley-Milner type system. So Scheme doesn't count.
And I'm not saying a language isn't FP if it isn't typed, but that types are mostly for FP languages.
>Rust has a great type system and isn’t an FP lang
This kind of statement is utterly meaningless.
First, don't be autistic. Paradigms are meaningless, only models of computation have meaning, because no language implements exclusively one model of computation. Imagine programming in bare lambda calculus (enjoy not having I/O) or in bare state machines (enjoy compiling every expression into state transitions by hand).
Hindley-Milner type systems only make sense for describing the types of arguments and return values of functions. And that's exactly what Rust's type system does, isn't it?
You can't remove functions from Rust's type system. And anyway, Rust's type system is BASED on the ML languages' type system, which makes your statement even more stupid and absolutely doesn't disprove what I said.
And hello, types originally came from typed lambda calculus.
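
To be concrete about what a Hindley-Milner system actually does, GHC infers these without a single annotation (variable names may differ):

compose f g x = f (g x)
-- inferred: compose :: (b -> c) -> (a -> b) -> a -> c

twice f = f . f
-- inferred: twice :: (a -> a) -> a -> a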

I meant to say somewhere that FP is for lambda calculus, or the functional features of languages which have lambda calculus as their semantics.

Your claim is that types are for FP, but non-functional languages can take advantage of type systems as well. For example, you can enjoy the benefits of Rust's type system without writing Rust in a functional style. The fact that the type system is heavily inspired by ML doesn't change anything

you can't sell retards on type tetris because they think high level languages are supposed to resemble machine code output