Much of the New Jersey approach is about getting away with less than is necessary to get the /complete/ job done. Perl, e.g., is all about doing as little as possible that can approximate the full solution, rather like the entertainment industry's special effects and make-believe, which for all practical purposes /are/ the real thing. Regular expressions are a pretty good approximation to actually parsing the implicit language of the input, too, but the rub with all these 90% solutions is that you have /no/ idea when they return the wrong value, because the approximation destroys any ability to determine correctness. Most of the time the error is large enough to cause a crash of some sort, but there is no way to do transactions either, so a crash usually means a debugging and rescue session to recover the state prior to the crash. This is deemed acceptable in the New Jersey approach. The reason they think this also /should/ be acceptable is that they believe getting it exactly right is more expensive than fixing things after crashes. Therefore, the whole language must be optimized for getting first approximations running fast.
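As a minimal illustration of the "you have no idea when they return the wrong value" point (my own sketch, not from any poster; the input format is made up), here is a regex extractor that ignores escape rules and returns a truncated value instead of signalling failure:

import re

# A "90%" extractor: a quote, anything that isn't a quote, a quote.
# On input it doesn't really understand, it silently returns a wrong value.
line = r'name="John \"Johnny\" Doe" role="admin"'
pairs = re.findall(r'(\w+)="([^"]*)"', line)
print(pairs)
# [('name', 'John \\'), ('role', 'admin')]  <- first value truncated, no error raised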

Nakadashi

Regular expressions are only useful for searching text. Anyone using regexp for parsing is an absolute braindead retard.

>anime spam thread
hmmm...

>words too big for my tiny brain so it's spam
kys

friend of the OP?

>le regex bad meme
Anyone who complains about parsing with regex is a retard and probably a serial StackOverflow contributor.

If the language you're trying to recognize is in fact regular, regular expressions are the right tool to use, you'd have to be retarded NOT to use them.

Even if you have a regular language, writing a regular expression is the wrong move: the regular expression that could parse the language will probably be non-trivial, and unless you spend a lot of time on it, it'll probably be wrong. See the email address regexp as a classic example.
If you have a language that can be described as a finite state machine, the correct solution is to parse the character stream. But of course unix weenies are completely satisfied with a 90% solution, so retards keep using regexps anyway.
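For contrast, a sketch (mine, in Python for brevity) of what "parse the character stream" means for a regular language: an explicit state machine where every state and transition is spelled out, so nothing is approximated.

def is_signed_integer(s: str) -> bool:
    # States: "start" (nothing seen), "sign" (only a sign seen), "digits" (>= 1 digit seen)
    state = "start"
    for ch in s:
        if state == "start":
            if ch in "+-":
                state = "sign"
            elif ch.isdigit():
                state = "digits"
            else:
                return False
        elif state == "sign":
            if ch.isdigit():
                state = "digits"
            else:
                return False
        elif state == "digits":
            if not ch.isdigit():
                return False
    return state == "digits"   # must end having seen at least one digit

assert is_signed_integer("-42")
assert not is_signed_integer("4-2") and not is_signed_integer("+")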

You can always validate the results afterwards.
See how Python compares keys in a dict or set as an example:
- First, the key's hash is computed with the __hash__ magic method.
- Stored entries are compared against that hash.
- Only when the hashes match is the __eq__ magic method run.
- If that returns True, the objects are considered equal.
A matching hash does not guarantee the objects are equal, only that they can be; a mismatching hash guarantees they are not. Thus you filter out "90%" of the non-equal objects without ever running the costly "hard-equality" check, and still with 100% fidelity. This is the key.
One could complain that this does more work when most of the objects being compared actually are equal. That would be a valid reason not to use "soft-equality" checks.
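A sketch of that filtering idea in plain Python (my illustration; probably_equal is a made-up helper, not a real API):

def probably_equal(a, b) -> bool:
    # Cheap check first: different hashes mean the objects cannot be equal.
    if hash(a) != hash(b):
        return False
    # Hashes match, so fall back to the full (potentially costly) __eq__ check.
    return a == b

print(probably_equal((1, 2, 3), (1, 2, 3)))   # True
print(probably_equal((1, 2, 3), (1, 2, 4)))   # False, usually rejected by the hash alone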

Assertions can prevent errors, but not if the method itself is fundamentally flawed. How do you check if a regexp matched a false positive?
You receive an approximation, so all you can tell is whether it's approximately right (i.e. sometimes wrong).

confirmed retard
>If you have a language that can be described as a finite state machine, the correct solution is to parse the character stream.
has no relation to the post you're quoting. Bet you've never actually parsed anything useful in your life and just regurgitated buzzwords from your current CS course.

Data validation depends on your use case.
If you are going to be shitposting about subjective shit, and repeating the same shit over and over, don't bother replying. Hobbyists like you are fucking cancer.

Worse is better works for software insofar as it is information; worse is better doesn't work for software insofar as it is machinery.
Focusing on how an "incomplete" machine doesn't satisfy its design criteria ignores the fact that, as you have stated, it will tomorrow, in a way that other mechanical systems could take decades to achieve.
Remember to pay your wizards if you can spare a donation to open source.

>no argument
Are you a web monkey?

What does this have to do with NJ?

>New Jersey approach
wtf is that?

this post makes absolutely no sense. looks like a bot

Well, lucky (You): Raku thought likewise and introduced literal lexical grammars and tokens as part of the language itself
docs.raku.org/language/grammars
grammar Calculator {
    token TOP { [ <add> | <sub> ] }
    rule  add { <num> '+' <num> }
    rule  sub { <num> '-' <num> }
    token num { \d+ }
}

class Calculations {
    method TOP ($/) { make $<add> ?? $<add>.made !! $<sub>.made; }
    method add ($/) { make [+] $<num>; }
    method sub ($/) { make [-] $<num>; }
}

say Calculator.parse('2 + 3', actions => Calculations).made;

# OUTPUT: «5␤»

Also the thing is, you can always just use a set of regexes and if the input doesn't match any of them 100% then just throw an error.
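A sketch of that "match fully or fail loudly" approach (mine; the command formats are made up):

import re

PATTERNS = [
    re.compile(r"set (\w+)=(\d+)"),
    re.compile(r"get (\w+)"),
]

def parse_command(line: str):
    # Each accepted form has its own anchored pattern; anything that doesn't
    # match one of them completely is rejected instead of half-parsed.
    for pat in PATTERNS:
        m = pat.fullmatch(line)
        if m:
            return m.groups()
    raise ValueError(f"unrecognized command: {line!r}")

print(parse_command("set retries=3"))   # ('retries', '3')
parse_command("set retries=three")      # raises ValueError instead of guessing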

>i'm pretending to be a retard on the internet
>give me attention please
here you go

"New Jersey" approach is just being a retard and not knowing how to do anything. It's like you take your car to get your windshield fixed and he just puts duct tape on it. Everything "New Jersey" gives you the feeling "you could have done something better yourself" which is why they always reinvent wheels. It's not a "philosophy" it's just what you get when you have idiots who don't care about anything.

slow as fuck
not suited for real (big) grammars because there is no cycle/infinite-recursion check, which makes them unusable

stop larping neophyte

raku is the most based language I'll never learn