
Have you read your PFDS today?

Name: Anonymous 2015-11-13 21:39

Purely Functional Data Structures
http://www.cs.cmu.edu/~rwh/theses/okasaki.pdf
When a C programmer needs an efficient data structure for a particular problem, he or she can often simply look one up in any of a number of good textbooks or handbooks. Unfortunately, programmers in functional languages such as Standard ML or Haskell do not have this luxury. Although some data structures designed for imperative languages such as C can be quite easily adapted to a functional setting, most cannot, usually because they depend in crucial ways on assignments, which are disallowed, or at least discouraged, in functional languages. To address this imbalance, we describe several techniques for designing functional data structures, and numerous original data structures based on these techniques, including multiple variations of lists, queues, double-ended queues, and heaps, many supporting more exotic features such as random access or efficient catenation.

In addition, we expose the fundamental role of lazy evaluation in amortized functional data structures. Traditional methods of amortization break down when old versions of a data structure, not just the most recent, are available for further processing. This property is known as persistence, and is taken for granted in functional languages. On the surface, persistence and amortization appear to be incompatible, but we show how lazy evaluation can be used to resolve this conflict, yielding amortized data structures that are efficient even when used persistently. Turning this relationship between lazy evaluation and amortization around, the notion of amortization also provides the first practical techniques for analyzing the time requirements of non-trivial lazy programs.
 
Finally, our data structures offer numerous hints to programming language designers, illustrating the utility of combining strict and lazy evaluation in a single language, and providing non-trivial examples using polymorphic recursion and higher-order, recursive modules.

Name: Anonymous 2015-11-13 21:56

uh no im not really into pokemon

Name: Anonymous 2015-11-14 7:21

Name: Anonymous 2015-11-15 12:30

Still have to pay that logarithmic penalty which can mean double or triple slowdown.

Name: Anonymous 2015-11-15 13:57

the logarithmic slowdown is not a "penalty", it's a crown of thorns which keeps one vigilant. Notice how mutists are always slovenly and unkempt?

Name: Anonymous 2015-11-15 22:46

>>4
Still faster than maintaining undo/redo lists or serializing snapshots, as imperative coders try to do.

Name: Anonymous 2015-11-16 5:31

>>6
Except they almost never need to do that, and when they do need to do things like that, they can just adopt functional style within their multiparadigm language of choice.

Name: Anonymous 2015-11-16 19:27

>>7
Sure, if they want to reimplement everything that uses the update-in-place data structure that is at the core of their program (if it wasn't core, they wouldn't need undo/redo).

Alternatively they try to make their program multithreaded and end up with a blob of shit resembling an update-in-place data structure heavily guarded by mutex locks.

IHBT

Name: Anonymous 2015-11-17 0:12

>>8
One can allocate new nodes for a data structure in any language.

Name: Anonymous 2015-11-17 3:04

>>7
They also have to rewrite the memory management system if they hope to reach the performance of an FP solution. C isn't built for this.

Name: Anonymous 2015-11-17 18:29

Write me a purely functional, recursively defined, lazily evaluated red black tree, /prog/.

Name: Anonymous 2015-11-17 19:29

>>9
Everything using the data structure, dumbass.
Every insert(tree, k, v) now must become tree = insert(tree, k, v) (along with all the manual memory management involved, if applicable) and then the same code transformations must happen at each and every one of those bits of code to avoid exactly the same problems at a slightly different level.
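
To make that concrete, here is a minimal sketch of a persistent BST in C (all names made up, error handling omitted), where insert copies the search path and shares every untouched subtree with the old version:

    #include <stdlib.h>

    /* Persistent BST node: never modified after construction. */
    typedef struct node {
        int key;
        int val;
        const struct node *left, *right;
    } node;

    static const node *mk(int key, int val, const node *l, const node *r)
    {
        node *n = malloc(sizeof *n);
        n->key = key; n->val = val;
        n->left = l;  n->right = r;
        return n;
    }

    /* Returns a NEW root; the old version stays valid and unchanged. */
    static const node *insert(const node *t, int key, int val)
    {
        if (!t)           return mk(key, val, NULL, NULL);
        if (key < t->key) return mk(t->key, t->val, insert(t->left, key, val), t->right);
        if (key > t->key) return mk(t->key, t->val, t->left, insert(t->right, key, val));
        return mk(key, val, t->left, t->right);   /* same key: replace the value */
    }

    /* Every call site now changes from insert(tree, k, v) to
       tree = insert(tree, k, v), and has to decide when, if ever,
       the old version's nodes can be freed. */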

Name: Anonymous 2015-11-18 19:49

>>10
>hope to reach the performance of an FP solution

Ahahahaha.

Name: Anonymous 2015-11-18 23:58

>>13
In my day, people used to put sleep in their code for that.

Name: Anonymous 2015-11-19 5:56

>>13
I will only say to attempt it and see for yourself. C easily turns to molasses once you start pushing complexity, unless you rewrite everything from scratch that you'd get for free in other languages, including memory management.

Name: Anonymous 2015-11-19 6:01

>>15
But I like programming in portable assembly code! Only lamerz don't invent here!

Name: Anonymous 2015-11-19 8:24

>>15
C encourages the programmer to stay within reasonable constraints for performance. Haskell encourages the programmer to avoid thinking about performance, and forces a model that incurs overhead, while the programmer hopes the compiler is clever enough to optimize away the overhead, and at the end of the day it turns out it isn't.

If you want a thread safe malloc you can write your own. You can use pure functional data structures in C by just allocating things and not modifying them. And in parts of the program where a different model is more useful, you can do that instead. It's not a big deal to go between the two. They can, and should coexist.
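
For instance, a bare-bones sketch (invented names, error handling omitted) of the allocate-and-don't-modify approach, where "new" lists simply share the old cells:

    #include <stdlib.h>

    /* Immutable list cell: build new cells, never write through old ones. */
    typedef struct cell {
        int head;
        const struct cell *tail;
    } cell;

    static const cell *cons(int head, const cell *tail)
    {
        cell *c = malloc(sizeof *c);
        c->head = head;
        c->tail = tail;
        return c;
    }

    int main(void)
    {
        const cell *xs = cons(1, cons(2, cons(3, NULL)));
        const cell *ys = cons(0, xs);        /* "new" list; shares all of xs */
        const cell *zs = cons(9, xs->tail);  /* another version sharing xs's tail */
        (void)ys; (void)zs;
        return 0;
    }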

Name: Anonymous 2015-11-19 9:00

>>17
malloc/free is dog slow for functional use compared to garbage collectors. But again, your response as a C programmer is "Write everything you need from scratch in low-level code" instead of "Reuse tools that are appropriate and high-performance for your purpose".

Sorry, but I'm an adult who programs to get things done, not a kid who's just tinkering around for its own sake. These are solved problems.

Name: Anonymous 2015-11-19 19:42

>>17
>the programmer hopes the compiler is clever enough to optimize away the overhead, and at the end of the day it turns out it isn't.
99% of the time the overhead doesn't matter, because on a computer capable of running an OS, you don't need to care. (Embedded applications, by your logic, will do better to be entirely in ASM rather than C, because of performance.) If the overhead does become a problem, you can tune the fuck out of it[1].

>If you want a thread safe malloc you can write your own.
Sure, just bang one out over the weekend. It's not like there are many malloc implementations already.

>You can use pure functional data structures in C by just allocating things and not modifying them.
That's immutable, not functional. And while you're doing that, you can enjoy writing your own garbage collector to go with it, and for every new allocation you can convince yourself you're doing it for performance.

Sometimes, I don't believe that people can believe what they are saying. This is one of those times, and you are one of those people.

[1] http://book.realworldhaskell.org/read/profiling-and-optimization.html

Name: Anonymous 2015-11-20 18:37

>>17,19
>the programmer hopes the compiler is clever enough to optimize away the overhead, and at the end of the day it turns out it isn't.
You can get mad dosh for making problems from thin air. Read Machiavelli some time.

Look at all the cash generated by null-terminated strings. They're slow, hard to use safely, need escaping when including binary data, make algorithms that need a length require an extra scan, and scanning for the length is not only O(n), but thrashes your cache. People get grants and sell hardware for making it faster to scan for a 0 instead of, you know, including the length.

Name: Anonymous 2015-11-20 18:47

Assembly language is my mother tongue. It was trivial to write an ASM
bootloader when I had the ATA/ATAPI algorithm.

What you don't understand is that assembly language is my mother tongue.

You guys are young babies. I was your age when I was your age and now I have
advanced really really far and have divine intellect.

Name: Anonymous 2015-11-21 3:02

The only use for Assembly is to write Scheme interpreters.

Name: Anonymous 2015-11-21 23:52

>>18
>malloc/free is dog slow for functional use compared to garbage collectors.
GC is shit. Therefore functional is shit if you insist it depends on it.

>Sorry, but I'm an adult who programs to get things done, not a kid who's just tinkering around for its own sake.
So the reason you incur overhead that won't be optimized away is because you have to get things done.

>These are solved problems.
I'm not impressed by the solutions you are presenting me with.

>>19
>If the overhead does become a problem
Overhead is always a problem. Once you've really realized the potential of a machine, it's hard to have patience for overhead, especially when the benefit is just adhering to someone's preferred style.

>And while you're doing that, you can enjoy writing your own garbage collector to go with it,
Here we go again. GC is shit. If you can't optimize it away completely at compile time, your program is a product of bad design.

>>20
Nothing is stopping you from writing a string library that uses lengths.
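
For example, a counted string type is only a few lines of C (a sketch, names invented, error handling omitted):

    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    /* Counted string: length stored up front, data may contain '\0'. */
    typedef struct {
        size_t len;
        char  *data;
    } str;

    static str str_from(const char *bytes, size_t len)
    {
        str s = { len, malloc(len) };
        memcpy(s.data, bytes, len);
        return s;
    }

    /* O(a.len + b.len): no terminator scans, no cache-thrashing strlen. */
    static str str_concat(str a, str b)
    {
        str s = { a.len + b.len, malloc(a.len + b.len) };
        memcpy(s.data,         a.data, a.len);
        memcpy(s.data + a.len, b.data, b.len);
        return s;
    }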

Name: Anonymous 2015-11-22 9:17

>>23
>I'm not impressed by the solutions you are presenting me with.
Because you're a fucking tool who doesn't grasp the benefits of actual advancement in both productivity & runtime speed, because you play with toy problems.

Name: Anonymous 2015-11-22 10:58

>>23
>Overhead is always a problem.
Do you live in a world where computer programs do not interact with humans in any way? Who gives a shit if the results come through 8µs faster, if it took an extra month of development time to remove the ``overhead''? Those are the kinds of numbers we're talking about here.
Or are you facetiously agreeing, and saying that there is always overhead, including in things like clock speed and memory round-trip time, and that in the slightly bigger picture, GC overhead is negligible compared to disk and network delays?
Unless your domain is pure calculations on an embedded, OS-less device, you will have overhead.

>Here we go again. GC is shit.
You have immutable trees in C. You are updating a 1MB tree. You end up with two 1MB trees because of your naive implementation of your immutable trees. You decide to remove the memory overhead, by sharing data between trees old and new. However, you now need to keep track of which parts are OK to free because they belong solely to the old tree, and which parts are referenced by the new tree. You decide to keep a tag on each tree node saying how many references there are to it. Congratulations, you have reinvented GC.
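
Spelled out as a sketch (invented names, error handling omitted), that per-node tag is exactly hand-rolled reference counting:

    #include <stdlib.h>

    /* Shared tree node with a reference count. */
    typedef struct rcnode {
        int refs;
        int key, val;
        struct rcnode *left, *right;
    } rcnode;

    static rcnode *retain(rcnode *n)
    {
        if (n) n->refs++;
        return n;
    }

    static void release(rcnode *n)
    {
        if (n && --n->refs == 0) {
            release(n->left);    /* children may still be shared by other versions */
            release(n->right);
            free(n);
        }
    }

    static rcnode *mk(int key, int val, rcnode *l, rcnode *r)
    {
        rcnode *n = malloc(sizeof *n);
        n->refs = 1;
        n->key = key; n->val = val;
        n->left = retain(l);
        n->right = retain(r);
        return n;
    }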

Name: Anonymous 2015-11-22 11:11

>>25
>Congratulations, you have reinvented GC.
And the worst and slowest variant of GC at that.

Name: Anonymous 2015-11-22 11:13

>>1
Weird, I read the first chapter of that book yesterday, and now I see a thread on /prog/ about it. What does it all mean?

Name: Anonymous 2015-11-22 17:15

>>27
This thread is a week old though.

Name: Anonymous 2015-11-23 8:39

>>24
I get what I need to get done and I get better results than I ever would using your mountain of crap that's written in C anyways. Haskell is not advanced. Don't kid yourself.

>>25
Haskell in its current form will incur overhead, and sometimes this overhead, whether it be time or memory, is enough to make your program useless for its purpose. And even if you have a fast computer with lots of memory and you are able to do what is required with a language that introduces overhead, aren't you curious to see what you could do if it could work 100 times faster and operate in virtual memory on datasets that were 100 times larger? It increases your capabilities which then inspire developments you may have never considered had you stayed on a slow platform that simulated using computers from 10 years ago.

>Congratulations, you have reinvented GC.
You have a very narrow idea in your mind of a shared data structure. I bet you assume I use malloc and free as well. And you draw data structures with nodes and arrows like you're in elementary school.

Name: Anonymous 2015-11-23 11:41

>>29
>I get what I need to get done

As I said, toys. Any purely fixed-function pipeline where the human can simply optimize a single static chain of execution is not an interesting or difficult problem. And even static pipelines can have different optimal performance characteristics depending on the runtime nature of the data they're processing.

But even in these simple cases, it's stupid that you spend your time wrangling bytes instead of teaching the machine how to do it for you. Have fun reinventing the wheel every single time.

Name: Anonymous 2015-11-23 19:42

>>29
>aren't you curious to see what you could do if it could work 100 times faster and operate in virtual memory on datasets that were 100 times larger?
I might be curious if I had a dataset 100 times larger. As they say, YAGNI.
Read to the bottom of my link in >>19 where the author rewrote a chunk of Haskell until the generated ASM was as good as hand-written. The original code can't have even taken 2 minutes to write.
When it doesn't matter, ``overhead'' from not using C is negligible. When it matters, you can make the overhead disappear.

>You have a very narrow idea in your mind of a shared data structure. I bet you assume I use malloc and free as well. And you draw data structures with nodes and arrows like you're in elementary school.
This is an amusingly substanceless rebuttal. If you are so enlightened, would you care to sketch out an immutable, shared tree, with minimal memory overhead, and without nodes and branches? Or are you saving that for your master's thesis? Or HIBT?

Name: Anonymous 2015-11-25 9:42

>>30
>But even in these simple cases, it's stupid that you spend your time wrangling bytes instead of teaching the machine how to do it for you. Have fun reinventing the wheel every single time.
You assume too much about me. 60% of my time is devoted to metalinguistic abstraction. The difference between me and you is that you depend on prepackaged abstractions that fit one and only one paradigm and introduce overhead, while I have the freedom to engineer my abstractions and how they target the machine I'm using. You force yourself to choose from a limited set of abstractions while I'm free to invent my own.

>>31
>When it matters, you can make the overhead disappear.
In theory yes, but in practice no.

>This is an amusingly substanceless rebuttal. If you are so enlightened, would you care to sketch out an immutable, shared tree, with minimal memory overhead, and without nodes and branches? Or are you saving that for your master's thesis? Or HIBT?
Using garbage-collected nodes gives you a shared data structure with the most generic interface. All your program has to worry about is managing references, which in C means a GC library or reference counting, and in other languages is built into the runtime. If you restrict the interface, you can do better. If you really study how your program is utilizing a shared data structure, you'll likely find the interface is overkill, and you can manage with a substitute in which the lifetimes of objects can be anticipated. You could say I use these concepts but then optimize their use away at compile time.
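
One possible reading of that, purely as an illustrative sketch and not necessarily what is meant here: if every version built during, say, one editing session dies at the same time, the per-node bookkeeping can collapse into a single arena that is dropped in one shot:

    #include <stddef.h>

    /* Bump-pointer arena: every node allocated from it shares one lifetime,
       so there is no per-node free, refcount, or tracing at all. */
    typedef struct {
        char  *base;
        size_t used, cap;
    } arena;

    static void *arena_alloc(arena *a, size_t n)
    {
        n = (n + 15) & ~(size_t)15;              /* keep allocations aligned */
        if (a->used + n > a->cap) return NULL;   /* a real one would grow or chain */
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    /* Drop every version built in this lifetime at once. */
    static void arena_reset(arena *a) { a->used = 0; }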

Name: Anonymous 2015-11-25 11:46

>dubs

Name: Anonymous 2015-11-25 17:00

>>32
You know nothing, Jon Snow.

Working in Lisp lets you work with true abstraction at the [meta-]expression/declaration level all the way down to assembly code, and all the way up to however high you care to get, without writing fucking inline asm. It's a far more integrated system from top to bottom than any of the C tripe you promote. There is no single fixed abstraction, there is just the wild ability to create abstraction.

>You force yourself to choose from a limited set of abstractions while I'm free to invent my own.

lol, again, have fun playing with bytes.

>If you really study how your program is utilizing a shared data structure, you'll likely find the interface is overkill...
holy shit, you're insane. Are you the TempleOS guy?

Name: Anonymous 2015-11-26 9:02

>>34
>Working in Lisp lets you work with true abstraction at the [meta-]expression/declaration level all the way down to assembly code, and all the way up to however high you care to get, without writing fucking inline asm.
Sounds like the opposite of Lisp.

Name: Anonymous 2015-11-26 10:20

The true power of lisp is that whenever you need to call a function, you can just return() to it

Name: Anonymous 2015-11-26 12:58

>>33
Who are you quoting?

Name: Anonymous 2015-11-26 13:16

>>37
He's quoting >>32. Are you blind?

Name: Anonymous 2015-11-26 14:06

>>37
le who are you epic quoting xD

Name: Anonymous 2015-11-26 15:52

>>39
Satire is the lowest form of comedy and the lowest form of wit.
