
farbfeld

Name: Anonymous 2016-01-13 1:50

farbfeld is a lossless image format which is easy to parse, pipe and
compress.
It has the following format:

| Bytes | Description |
|--------|------------------------------------------------------------|
| 8 | "farbfeld" magic value |
| 4 | 32-Bit BE unsigned integer (width) |
| 4 | 32-Bit BE unsigned integer (height) |
| [2222] | 4⋅16-Bit BE unsigned integers [RGBA] / pixel, row-aligned |


http://tools.suckless.org/farbfeld/
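To illustrate how little parsing that table implies, here is a minimal C sketch (just an illustration, not code from the farbfeld tools) that reads the header from stdin and prints the dimensions:

/* Minimal sketch: read a farbfeld header from stdin and print the
 * image dimensions. Field layout follows the table above. */
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	uint8_t hdr[16];
	uint32_t width, height;

	if (fread(hdr, 1, sizeof(hdr), stdin) != sizeof(hdr))
		return 1;
	if (memcmp(hdr, "farbfeld", 8))
		return 1; /* wrong magic value */

	/* the two 32-bit big-endian fields, assembled portably */
	width  = (uint32_t)hdr[8]  << 24 | (uint32_t)hdr[9]  << 16 |
	         (uint32_t)hdr[10] <<  8 | (uint32_t)hdr[11];
	height = (uint32_t)hdr[12] << 24 | (uint32_t)hdr[13] << 16 |
	         (uint32_t)hdr[14] <<  8 | (uint32_t)hdr[15];

	printf("%" PRIu32 "x%" PRIu32 "\n", width, height);
	/* what follows are width*height pixels, each four 16-bit BE
	 * unsigned integers (RGBA), row by row */
	return 0;
}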

Name: Anonymous 2016-01-13 2:42

Why 32 bit? Why not 36 bit? How am I supposed to make an efficient implementation on my PDP-10?
Why RGBA? What if I want to use a better format? And why should I be limited to 64-bit-per-color?

Dependencies
libpng
libjpeg-turbo
In turn, libpng depends on zlib.
Now, I ask: why does something so simple that even a kid could make it depend on them? And why on libpng and zlib specifically, considering that they are not suckless at all?

Name: Anonymous 2016-01-13 3:13

If you're not going to have parameters for color format, why bother with parameters for width and height? Just be all "suckless" and fix images to 4096x4096 while you're at it, because that's in line with all the other shitty, laughable wastes of time you already drain your life on.

Name: Anonymous 2016-01-13 3:24

For example, farbfeld always stores the alpha channel, even if the image doesn't have any alpha variation. This may sound like a big waste at first, but as soon as you compress an image of this kind, the compression algorithm (e.g. bzip2) recognizes that after every 48 bits there are 16 bits storing the same information. And the compression algorithms get better and better at this.

HAHAHAHAHAHAHAHAHAHAHAHAH!!!

Dude, just accept the fact that you're too stupid to be making decisions on this.

Name: Anonymous 2016-01-13 6:30

>>4
What's the problem? That's simple information theory. Even a naive Huffman encoder could recognize that.

Name: Anonymous 2016-01-13 6:37

>>5
Are you even serious? A Huffman encoder doesn't perceive anything about the location of bytes in a stream, only their frequencies.

Name: Anonymous 2016-01-13 8:38

Just compress bitmaps with zip/gzip/7z.
It's practically what PNG does internally.

Name: FRIGN 2016-01-13 11:06

>>2

Why 32 bit?

It's a power of 2, unlike 36. :P No, to be exact, there's no reason to go beyond that, given 2^32-1 = 4,294,967,295 is large
enough for anything. Even the largest images ever made easily fit their dimensions into 32 bits. The next step would be 64, but that's overkill.

How am I supposed to make an efficient implementation on my PDP-10?

Sorry, no support for PDP-10's :P

Why RGBA? What if I want to use a better format?

A better format like what? CMYK? Get out of here, man! :P
I actually worked on a CIELAB+Alpha implementation a year ago; it didn't work out though. I talked about it in my farbfeld talk at slcon2.

And why should I be limited to 64-bit-per-color?

*per-channel. Because you will probably not need more and I wanted to go for the 99.5%.

Now, I ask: why does something so simple that even a kid could make it depend on them?

The dependencies are only needed to convert PNGs to the farbfeld format. They don't have anything to do with the format itself.
I went for libpng anyway because it's the most widespread, even though it sucks balls.

Name: FRIGN 2016-01-13 11:07

>>8

Actually, I meant *per-pixel. It's 16-bit-per-color.

Name: FRIGN 2016-01-13 11:08

>>3

Nice try, troll :P

Name: FRIGN 2016-01-13 11:09

>>4
>>5
>>6

Well, before we debate too much about information theory, we might all agree on the fact that it's about pattern recognition. That's what I talked about in the first place.
How else would compression algorithms work?

Name: FRIGN 2016-01-13 11:10

>>7

I agree. Farbfeld is very close to just being a bitmap stream, only with minimal metadata (width + height) and a magic value at the beginning.
I bet people are already using numerous similar formats internally in their projects. It's helpful to unite these things under one name so everybody has a reference.

Name: Anonymous 2016-01-13 11:19

>>11
Let this thread be an immortalized testament to the cluelessness of "suckless".

Just to humor you, here's a real experiment you can run:

If all those same alpha values truly compress into the information "Every N bytes there's always 2 FFs", then the compressed form would only need a handful of bytes to represent the entire file's alpha.

Take an image with a constant alpha channel value, and remove alpha to yield 48-bit pixels instead.

If your rationale is true, the compressed file size will not change much at all between the file with an alpha channel, and the one without.

Do this with a large image, and post the file sizes.
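For reference, the stripping step could be as dumb as this hypothetical sketch (copy the 16-byte header through, then keep 6 of every 8 pixel bytes):

/* Hypothetical helper for the experiment above: copy a farbfeld
 * stream from stdin to stdout, dropping the 16-bit alpha of every
 * pixel so each pixel shrinks from 8 to 6 bytes. */
#include <stdio.h>

int
main(void)
{
	unsigned char hdr[16], px[8];

	if (fread(hdr, 1, sizeof(hdr), stdin) != sizeof(hdr))
		return 1;
	fwrite(hdr, 1, sizeof(hdr), stdout); /* header stays as-is */

	while (fread(px, 1, sizeof(px), stdin) == sizeof(px))
		fwrite(px, 1, 6, stdout); /* R, G, B only, skip A */

	return 0;
}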

Name: Anonymous 2016-01-13 11:23

>>1
It depends on Google's Go, therefore it's top bloat.

Name: Anonymous 2016-01-13 11:40

>>13
(and don't do something stupid like a massive blank white image. Use something noisy and photographic.)

Name: FRIGN 2016-01-13 11:44

>>13

In information theory, you assume an "ideal" compressor which basically goes as far as the entropy allows.
Okay, the test case you are giving is problematic, because removing the alpha channel from an example image puts new values next to each other as neighbours.
If there's any correlation between the now-neighbouring B and R values, the image will shrink down even further.

Did you know that A.I. development and compression-algorithm research are basically the same field? I'm sure we'll all smile about it in 10 years, when smart compressors really take things to the next level.

Name: FRIGN 2016-01-13 11:45

>>14

Nope, it's an external project to be used inside Go. The tools are written in C.

Name: FRIGN 2016-01-13 11:49

>>15

Okay, I found a proper test case:

IMAGE1: random RGBA
IMAGE2: random RGB with an FF alpha channel

We can assume IMAGE1 to have full entropy; for IMAGE2, an ideal compressor would recognize that 1/4 of the data has very low entropy and shrink the size down by 25%. Let me build something, because this topic actually interests me as well.

I will of course test it with bzip2.

Name: Anonymous 2016-01-13 11:49

>>16
I've worked independently in AI and compression research for quite a few years each. In my expert opinion, you have absolutely no fucking clue what you're talking about.

You're committing to designs that are based on assumptions you just pulled out of your ass, just because it allows you to shit out short do-nothing code.

Name: Anonymous 2016-01-13 11:52

>>18
The test case I described would be random 64-bit RGBA with A=FF, vs. random 48-bit RGB.

Name: Cudder !cXCudderUE 2016-01-13 11:57

BE
Backwards-endian instead of logical endian? Idiot.

Name: FRIGN 2016-01-13 12:08

>>20

Okay, test results are in:

8.1M Jan 13 12:53 hurl.ff
4.1M Jan 13 12:54 hurl.ff.bz2

8.1M Jan 13 12:56 hurl_noalpha.ff
3.6M Jan 13 12:56 hurl_noalpha.ff.bz2

Where hurl.ff is basically IMAGE1, a random set of RGBA values.
hurl_noalpha.ff is IMAGE2, the version with FF alpha values.

What I did is the following (for reproducibility):
I created a new image in GIMP, 1024x1024, and applied the Hurl filter, which basically randomizes all RGBA values (yes, it's really called that :P).
Then I wrote a little program to set only the alpha values to 65535 (32 bit unsigned max value), yielding hurl_noalpha.ff.
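The program is nothing fancy; a minimal sketch along those lines (not the exact code I ran) just rewrites the two alpha bytes of every pixel:

/* Sketch: read a farbfeld stream on stdin, force every alpha value
 * to 0xFFFF, write the result to stdout. */
#include <stdio.h>

int
main(void)
{
	unsigned char hdr[16], px[8];

	if (fread(hdr, 1, sizeof(hdr), stdin) != sizeof(hdr))
		return 1;
	fwrite(hdr, 1, sizeof(hdr), stdout);

	while (fread(px, 1, sizeof(px), stdin) == sizeof(px)) {
		px[6] = 0xff; /* alpha, high byte */
		px[7] = 0xff; /* alpha, low byte */
		fwrite(px, 1, sizeof(px), stdout);
	}
	return 0;
}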

Now let's compare the values:
The uncompressed size obviously didn't change. But let's compare the bzip2 versions, which surprisingly still found a way to compress the data to about half.

3.6/4.1 ~ 0.88 -> bzip2 managed to recognize the Alpha channel and saved 12% of space in that process.

Given this is a statistical issue, the test could be repeated multiple times (with a lot of random data). However, we can see the trend here. bzip2 is not an ideal compression algorithm, but recovering 12% out of a maximum of 25% is actually quite impressive.

And I don't know what this alleged AI/compression research expert wants. It's quite obvious that being able to predict upcoming data patterns may be the key to even better compression.
On the other hand, if he really is a compression researcher, it isn't surprising that the field is not progressing dramatically. :P

Name: FRIGN 2016-01-13 12:09

>>21

It's the network byte order. If this goes over your small head, that's not my problem.

Name: FRIGN 2016-01-13 12:10

>>19

troll troll troll :P

Name: Anonymous 2016-01-13 12:12

>>22
Those results are expected. Now run the test I described:

Compare the 3.6M result against a 48-bit compressed file.

Name: FRIGN 2016-01-13 12:27

>>25

Okay, here are the results

6.1M Jan 13 13:25 hurl_48bit.ff
3.1M Jan 13 13:25 hurl_48bit.ff.bz2

I cut out the alpha values, so it only stores RGB pixels. Now, as we can see, this gets close to the full 25% saving relative to the original (3.1M vs. 4.1M compressed, 6.1M vs. 8.1M uncompressed), which means we properly assessed the test's circumstances.

Name: Anonymous 2016-01-13 12:53

Name: hojad !OUKY5mcbp6 2016-01-13 12:54

>>22
65535 is the max value of an unsigned 32-bit integer? u must be the 1 with small head m8

>>23
when does this 'network byte order' meme die?

Name: Anonymous 2016-01-13 12:55

>>26
The result is that we have proven that always keeping the alpha around adds somewhere around 16-19% bloat to a file (3.1M → 3.6M) even in the best case of constant alpha.

The compression does not and cannot (with a tractable general compression algorithm) make up for that extra baggage on every pixel.

Not to mention that we can do the same test on 16 bits per channel vs 8 bits per channel. The latter will drastically reduce the filesize, again showing that your preference for tiny code produces needlessly bloated data files. 16 bits is great when it's needed. It's a waste of space and sucks (to use your own terminology) when only 8 suffice.

Thanks for shitting up computing with your extra bloat, faggot.

Name: FRIGN 2016-01-13 13:55

>>28

Yeah, nvm, I meant 16-bit. You knew what I meant.

"Big-endian is the most common format in data networking; fields in the protocols of the Internet protocol suite, such as IPv4, IPv6, TCP, and UDP, are transmitted in big-endian order. For this reason, big-endian byte order is also referred to as network byte order."

In the end it doesn't matter which endianness you use. If I had used LE for everything, I would still have to use endianness-conversion functions on the BE systems it may run on. The rest is cool. ;)
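(For illustration, a sketch of what I mean: writing a 32-bit field big-endian with plain shifts behaves the same on LE and BE hosts, no htonl() needed.)

#include <stdint.h>
#include <stdio.h>

/* Sketch: emit a 32-bit value in big-endian byte order; the shifts
 * make it independent of the host's endianness. */
static void
put_u32_be(uint32_t v, FILE *f)
{
	unsigned char b[4] = {
		(unsigned char)(v >> 24), (unsigned char)(v >> 16),
		(unsigned char)(v >> 8),  (unsigned char)v
	};
	fwrite(b, 1, sizeof(b), f);
}

int
main(void)
{
	/* e.g. the width and height fields of a 1024x1024 image */
	put_u32_be(1024, stdout);
	put_u32_be(1024, stdout);
	return 0;
}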

>>29

All valid points you made; no need to become insulting, or you'll notice people will stop talking to you.
YMMV of course, so why so charged? Nobody is forcing you to use farbfeld anyway, so if you prefer another format, feel free to use it. Today alone I talked to 4 people who told me that they would've loved to have had farbfeld in previous problematic situations. These people alone showed me that the time I spent on this was worth it. ;)

What projects are you working on at the moment?

Name: Anonymous 2016-01-13 14:36

>>30
You present an argument that you don't need to do what everybody else does to keep file size down. That argument is incorrect.

Besides, aren't you supposed to have, and defer to, tools that do one thing and do it well? You already defer to ImageMagick for conversion, if I remember right. Graphics compression is already a solved problem; you're just creating new substandard solutions in the space again.

Besides besides, suckless is invading /prog/ and needs to be shot down. Good thing it's so easy, with situations like this: touting what you don't know in defense of bad architectural decisions.

I'm currently building automated security software, which incidentally talks to all sorts of heterogeneous data sources and makes sense of them combined. Successfully. Because software should handle multitudinous difficult cases, not expect there to be only one specification for the world, tailored only to your data/code preferences and ignoring the reality of everything else.

These people alone showed me that the time I spent on this was worth it. ;)
Oh come on. If you spent more than 1 hour on this, you fail.

Name: Anonymous 2016-01-13 18:11

>>11
Optimise your quotes ``please''

Name: Anonymous 2016-01-13 18:22

>>1
Hey! That was my idea. I thought of it back when I was 13 years old while in the bath.

Name: Anonymous 2016-01-13 18:25

This is just the bimp format.

Name: Anonymous 2016-01-13 18:47

>>34
bimp has more features, in the same way Down's patients have more chromosomes.

Name: Anonymous 2016-01-13 20:43

One is forced to use complex libraries like libpng, libjpeg, libjpeg-turbo, giflib and others, read the documentation and write a lot of boilerplate in order to get started.
Nah, you're just using crap libraries if you need to write boilerplate. There are better libraries for existing formats.

Name: Anonymous 2016-01-13 20:51

Dependencies
libpng
libjpeg-turbo

Name: Anonymous 2016-01-13 21:25

There are incredibly few use cases where you're going to be dealing with pixels serially. The best place for a bitmap is in RAM, where random, parallel, and 2D box access can happen. Every graphics format library already exposes that view, just with better compression when serialized.

Name: Anonymous 2016-01-14 0:07

This is a classic example of unwarranted self-importance. Anybody can come up with a trivial data structure to store a picture in a file.
Image formats exist for a reason: first, normal people don't enjoy writing tedious code that re-implements a format; second, built-in compression in the standard actually reduces dependencies; third, having a format complex enough to convey information about what you are storing is actually pretty nifty if you don't live in a cave and somebody else might someday run your code.

suckless, cat-v, etc... the most insufferable assholes in programming.

Name: FRIGN 2016-01-14 1:25

I watch anime like princess tutu and magical lyrical nanoha.
