I do not like the many useless types when we could just define our own n-bit integers. A better type system for compile-time checking would also be nice. I would also like pure functions that I could use outside of function bodies, just like I do with macros.
>>5 Yes, but I cannot define an at-least-128-bit variable, for example. What if I want a 24-bit variable? I can't do it in standard C. The default types are just weird; they do not show their size. Is there any portable type that says I want to use only 32 bits (other than uint_least32_t)? There are many systems where long is 64 bits, and since the standard only says int is at least 16 bits I cannot rely on it. It feels like a waste of memory.
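For what it's worth, <stdint.h> already covers part of this. A minimal sketch of what standard C offers today (the 24-bit and 128-bit cases still have no portable exact-width type):

    #include <stdint.h>

    uint32_t       exact32;  /* exactly 32 bits -- optional in the standard, present on common platforms */
    uint_least32_t least32;  /* smallest type with at least 32 bits -- always available */
    uint_fast32_t  fast32;   /* "fastest" type with at least 32 bits -- always available */

    /* There is no portable uint24_t or uint128_t; gcc and clang offer
       unsigned __int128 as an extension, but it is not standard C. */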
Sure, this [b]BBS[/b] Software is shit but is it abelson shit?
Name:
Anonymous 2014-04-09 18:04
The variables are going to be aligned to the next multiple of the word size anyway; you won't lose anything by just using an unpacking/packing macro if you absolutely must access by bit. It is more complicated to do arithmetic with, say, two 512-bit numbers, but that gets evaluated in software anyway, so just implement it manually.
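Something along these lines is usually enough for sub-word fields; a sketch of the pack/unpack idea, with made-up macro names:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helpers (names invented): get/set an n-bit field at
       bit offset `pos` inside a 32-bit word. */
    #define FIELD_GET(word, pos, n)      (((word) >> (pos)) & ((1u << (n)) - 1u))
    #define FIELD_SET(word, pos, n, val) (((word) & ~(((1u << (n)) - 1u) << (pos))) | \
                                          (((uint32_t)(val) & ((1u << (n)) - 1u)) << (pos)))

    int main(void)
    {
        uint32_t w = 0;
        w = FIELD_SET(w, 0, 24, 0xABCDEFu);   /* pack a "24-bit" value       */
        w = FIELD_SET(w, 24, 8, 0x7Fu);       /* and an 8-bit one next to it */
        printf("%06x %02x\n", (unsigned)FIELD_GET(w, 0, 24), (unsigned)FIELD_GET(w, 24, 8));
        return 0;
    }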
Name:
Anonymous 2014-04-09 18:06
- Add GC
- Make it interpreted
- Remove curly braces
- Force indentation
- Change printf to be a statement instead of a function
- A real module system instead of the preprocessor.
- Better support for whole-program optimization / LTO (requires its own intermediate representation).
- Standard primitives for vector operations.
- getcontext / setcontext instead of setjmp / longjmp (see the sketch below).
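For context, this is roughly what the setjmp / longjmp mechanism from the last item looks like in use; a minimal, self-contained sketch, not anyone's real code:

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf on_error;

    static void do_work(int fail)
    {
        if (fail)
            longjmp(on_error, 1);   /* non-local jump back to the setjmp call */
        puts("work succeeded");
    }

    int main(void)
    {
        if (setjmp(on_error) == 0)  /* returns 0 on the initial call...          */
            do_work(1);
        else                        /* ...and nonzero when longjmp lands here    */
            puts("recovered from error");
        return 0;
    }

getcontext / setcontext would additionally let you save and switch complete execution contexts (stack included), which is why people want it for coroutines; setjmp can only jump back up the still-active call chain.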
If you do have some bizarre machine that works on 24-bit integers, the C compiler for that machine will almost certainly provide an exact-width type that you can use as int24_t and uint24_t.
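The standard even anticipates this: when an implementation provides an exact-width intN_t, it must also define the matching INTN_MAX macro, so code can probe for it. A small sketch, assuming nothing beyond <stdint.h> (the sample_t name is just for illustration):

    #include <stdint.h>

    #ifdef INT24_MAX
    typedef int24_t sample_t;          /* exact 24-bit type, if this implementation has one */
    #else
    typedef int_least32_t sample_t;    /* portable fallback: smallest type with >= 32 bits  */
    #endif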
>>9 Most of those requirements are at odds with each other.
except keep the ability to do cross-language libraries easily
Except this will never happen for any language that tries to standardize much more than structure layout and calling convention. The architecture-specific appendices for the SysV ABI alone are tens of pages long. Language designers love writing their own runtimes and hate compromising their elegant re-inventions to accommodate others. Even if you can pay people to play nice (like Microsoft did with the CLR), the resulting standard will need to be modified for every new feature that's above the level of syntactic sugar.
The bottom line is you will always need to write a translation layer to interface code in two languages if said languages weren't designed to interoperate in the first place.
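Concretely, that translation layer usually boils down to a flat C API over opaque handles. A hypothetical sketch (all names invented) of the kind of surface another language's FFI can bind to, since structure layout and calling convention are about the only things everyone agrees on:

    /* thing.h -- hypothetical flat C API meant to be consumed via FFI.
       Only plain functions, opaque pointers, and fixed-width integers
       cross the boundary; no structs by value, no runtime-specific
       callbacks. */
    #include <stdint.h>

    typedef struct thing thing;              /* opaque to the caller */

    thing   *thing_create(void);
    int32_t  thing_process(thing *t, const uint8_t *data, uint32_t len);
    void     thing_destroy(thing *t);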
I now realize this isn't related to this thread, but I'm going to post it anyway, because I want C to do this too, although I don't quite understand what I want it to do.
>>5,24 I don't understand this fixation on integer types either. It clearly can't be founded in a desire to interface with hardware that uses certain-size integer types, since (in practice) the provided primitive types map directly to machine operand sizes.
Do lots of people need to parse binary data that uses arbitrary-size bit-fields? Doing that in C is painful because doing it in machine language is also painful. The solution is to use a library or a domain-specific language intended for that.
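As an illustration of why it's painful, here is what reading a 24-bit big-endian field out of a byte buffer looks like by hand; a generic sketch, not tied to any particular format:

    #include <stdint.h>
    #include <stddef.h>

    /* Read a 24-bit big-endian unsigned field starting at buf[off].
       Hypothetical helper for illustration; real formats also need
       bounds checking and often non-byte-aligned offsets. */
    static uint32_t read_u24_be(const uint8_t *buf, size_t off)
    {
        return ((uint32_t)buf[off] << 16)
             | ((uint32_t)buf[off + 1] << 8)
             |  (uint32_t)buf[off + 2];
    }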
Name:
Anonymous 2014-04-10 14:05
int, uint, long, unsigned long, long long, unsigned long long, uint8_t, uint16_t, uint32_t, uint64_t
Just make it exactly like Java is now, except keep the ability to allocate memory with malloc.
Name:
Anonymous 2014-04-11 11:17
>>35 Malloc, eh? Kid, I have a story about malloc.
I once freed some memory I didn't need. Oh, but then I realized I still needed some of the data in that memory. Now, instead of removing the freeing part of my code, I did something much more exciting.
I knew the malloc implementation we were using managed memory in a LIFO manner, so I could basically get the same memory back. I just allocated more memory, with exactly the same size as the freed buffer, and there was my data again. Ingenious!
Heh heh, that code is still in production use. In a piece of software called OpenSSL.