
FFI is actually quite bad. You can't talk to native (C, C++, Go, Rust) code directly. You have to talk to Java, and if you need native code, go from Java to native.

You also communicate with Java through serialization, rather than through shared memory.

Oh, and any communication goes through a Future, which is annoying.


The secret ingredient is cash.


>If they'd named it sfo07s13-in-f14.google.com, then browsing to that URL sends google cookies. If it's some server from a recent acquisition that may not be up to Google's level of security, that's dangerous.

Sorry, I'm slightly confused.

I browse to newproduct.google.com. My browser queries DNS for the IP, which comes back as 192.168.0.1 [1]. It connects to 192.168.0.1, gets hit by an XSS, and sends my cookie value to evildoer.example.com.

How would it help you that the reverse DNS of 192.168.0.1 is sfo07s13-in-f14.1e100.net? The browser doesn't know that. It thinks it's going to newproduct.google.com.

[1]. Yup, that number is just an example.


Your example is not the same as the one in the comment you replied to. You picked a product hostname. The example was an infrastructure hostname.

The point is that Google (or any company with the same mindset) scopes down the number of machines that can receive your google.com cookie. Even their own machines often don't need it to do their job, so it's not worth the security risk to have your cookie sent more than necessary.


The real issue isn't that C doesn't have a standard behavior for int overflow, but that it's undefined.

What they could have done is make it implementation defined, like sizeof(int), which depends on the implementation (hardware) but isn't undefined behavior (so on x86/amd64, sizeof(int) will always be equal to 4).


It's undefined for a reason.

  size_t size = unreasonable large number;
  char buf = malloc (size);
  char *mid = buf + size / 2;
  int index = 0;
  for (size_t x = 0; x < big number; x++) mid[index++] = x;
A common optimization by a compiler is to introduce a temporary

  char *temp = mid + index;
prior to the loop and then replace the body of the loop with

  *(temp++) = x;
If the compiler has to worry about integer overflow, this optimization is not valid.
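
Putting those pieces together, here is a sketch of my own of the "before" and "after" forms, with made-up constants standing in for the "unreasonably large number" and "big number" above (not output from any real compiler):

    #include <stddef.h>

    #define N ((size_t)1 << 19)   /* stand-in for "big number" */

    /* The loop as written: indexed store through a signed counter. */
    void before(char *mid)
    {
        int index = 0;
        for (size_t x = 0; x < N; x++)
            mid[index++] = (char)x;
    }

    /* The strength-reduced form the compiler wants to emit.  It is only
       equivalent if "index" never wraps from INT_MAX to INT_MIN, which
       the compiler may assume because signed overflow is undefined. */
    void after(char *mid)
    {
        char *temp = mid;          /* temp = mid + index, with index == 0 */
        for (size_t x = 0; x < N; x++)
            *temp++ = (char)x;
    }

If index were allowed to wrap, the two loops would store to different addresses, so the transformation would be forbidden.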

(I'm not a compiler engineer. Losing the optimization may be worthwhile. Or maybe compilers have better ways of handling this nowadays. I'm just chiming in on why int overflow is intentionally undefined in the Fine Standard.)


Are you sure this was the intent of the standard writers back in the mid-to-late '80s and not something that modern compilers just happened to take advantage of? I'd really expect it to be the latter.


Integer overflow is certainly not undefined for this reason.

It's undefined because in the majority of situations, it is the result of a bug, and the actual value (such as a wrapped value) is unexpected and causes a problem.

For instance, oh, the Y2038 problem with 32 bit time_t.


>It's undefined because in the majority of situations, it is the result of a bug,

1. If it's a bug, it should overflow or crash (implementation defined, not undefined), or do what Rust does: crash at -O0 (or, if it's illegal to change defined behavior based on optimization level, add a --crash-on-overflow flag) and overflow on everything else.

2. There is plenty of code where it's intentional (such as the infamous if(a+5<a)).
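
For the record, a minimal sketch of why that idiom and undefined overflow don't mix (the function names are mine):

    #include <limits.h>

    /* The "infamous" pre-overflow check.  Since signed overflow is
       undefined, the compiler may assume a + 5 cannot wrap, fold the
       condition to false, and delete the branch. */
    int will_wrap_naive(int a)
    {
        return a + 5 < a;
    }

    /* A check with no overflow in sight: compare against the limit
       before adding. */
    int will_wrap_safe(int a)
    {
        return a > INT_MAX - 5;
    }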


You meant

    char * buf = malloc(size);
You dropped an asterisk. Since changing pointers returned by malloc() is a bad idea, I'd make it:

    char * const buf = malloc(size);


This is only useful if buf is involved in some preprocessor macrology which perpetrates a hidden mutation of buf.

   BIG_MACRO(x, y, z, buf); // error!
With the const in place, the programmer is informed that, to his or her surprise, BIG_MACRO mutates buf, and can take appropriate corrective action.

It's also useful in C++, since innocent-looking function calls can steal mutable references:

   cplusplusfun(x, y, z, buf); // error: arg 4 is non-const ref
No such thing in C, though; function calls are pure pass-by-value.

Changing pointers returned by malloc is sometimes done:

   if ((newptr = realloc(buf, newsize)) != 0)
     buf = newptr;
   else
     ...
In my experience, C code doesn't use const for anywhere near all of the local variables which could be so qualified.

If you enact a coding convention that all unchanged variables must be const, the programmers will just get used to a habit of removing the const whenever they find it convenient to introduce a mutation to a variable. "Oh, crap, error: x wasn't assigned anywhere before so it was const according to our coding convention. Must remove const, recompile; there we go!"

If you want to actually enforce such a convention of adding const, you need help from the compiler: a diagnostic like "foo.c: 123: variable x not mutated; suggest const qualifier".

I've never seen such a diagnostic; do you know of any compiler which has this?

I think that the average C module would spew reams of these diagnostics.


> If the compiler has to worry about integer overflow, this optimization is not valid.

I'm sure it's still possible to come up with an optimization that takes signedness into account and doesn't give up much in performance or code size.


size_t is unsigned, overflow is defined.


The type of index is, however, signed int.


You're right, I read diagonally :)

However, the optimization argument for signed overflow seems weird to me, because I can't see any reason why this argument would not apply to unsigned overflow as well.

If we keep undefined behavior to optimize things like "if (n < n + 1)" when n is signed, why not do the same when n is unsigned?

Conversely, if there is a good reason not to, then why would it not apply to signed overflow as well?


This case is not worth optimising, because the index should be size_t just like the original size. Then the compiler knows it won't overflow, and doesn't have to check.


And, the fix is easy: just use types of the same width for the counter and the boundary. Using a narrower counter is just begging for errors to happen. This is not a good coding style, and there is no point in having the compiler condoning it.

Compiling it and making it run? Sure. Bending over backwards to ensure it runs fast? Hell no.
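
For completeness, a sketch of the loop from upthread with the counter widened to size_t (and the dropped asterisk restored); the concrete sizes are placeholders of my own:

    #include <stdlib.h>

    int main(void)
    {
        size_t size = (size_t)1 << 20;   /* stand-in for the "unreasonably large number" */
        char *buf = malloc(size);
        if (buf == NULL)
            return 1;

        char *mid = buf + size / 2;
        size_t n = size / 2;             /* stand-in for "big number"; keeps mid[index] in bounds */

        /* Counter and bound share one type, so there is no signed
           overflow for the compiler to reason about. */
        for (size_t index = 0; index < n; index++)
            mid[index] = (char)index;

        free(buf);
        return 0;
    }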


Just a nitpick. Implementation is about the particular compiler and runtime (stdlib) implementation, not the hardware. Hardware is the platform hosting the implementation (these are ISO C standard defined terms).

A compiler targeting the x86 platform can implement sizeof(int) == 8, or whatever it pleases, as far as the C standard is concerned.

In practice, compilers don't get creative about this. But there are real-world cases where things are different, for example: http://www.unix.org/version2/whatsnew/lp64_wp.html


The modern case for keeping signed overflow as UB is that it unlocks compiler optimizations. For example, it allows compilers to assume that `x+1>x`.

If implementations are forced to define signed overflow, then these optimizations are necessarily lost. So implementation-defined is effectively the same as fully-defined.
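
As a concrete illustration of that assumption (just a sketch; actual codegen varies by compiler and flags):

    /* Signed: the compiler may assume x + 1 cannot overflow, so this
       can be folded to "return 1". */
    int gt_signed(int x)
    {
        return x + 1 > x;
    }

    /* Unsigned: wrap-around is defined (UINT_MAX + 1 == 0), so the
       comparison must really be performed; the result is 0 when
       x == UINT_MAX. */
    int gt_unsigned(unsigned x)
    {
        return x + 1 > x;
    }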


I suppose the question is, which of these optimisations are actually useful for the compiler to do automatically? Yours is the example that's always thrown about, but it always seems like the kind of optimisation that the programmer should be responsible for.


> on x86/amd64 sizeof(int) will always be equal to 4

Nothing is stopping your C compiler from making the guarantee sizeof(int)=4 on x86/amd64.


I think you are in agreement with the comment you are replying to.


The comment suggested the standard make it implementation defined rather than undefined. There's not a meaningful difference here.

Even today, an implementation may define signed overflow.


Yes, there is. Implementation defined means that a conforming implementation _must_ document its behavior.

That means that programmers don’t have to use trial and error to figure out how the compiler behaves and don’t have to _hope_ they found all the corner cases.
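
For what it's worth, GCC and Clang already offer something like this as an opt-in, documented behavior via -fwrapv, which defines signed overflow as two's-complement wrap-around. A minimal sketch, relying on that compiler-specific guarantee rather than the standard:

    /* Build with:  cc -fwrapv wrap.c */
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int x = INT_MAX;
        /* Wraps to INT_MIN under -fwrapv; undefined behavior without it. */
        printf("%d\n", x + 1);
        return 0;
    }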


And that is how we get #if defined(_THIS_THING_SOME_COMPILER_DEFINES) && !defined(__BUT_NOT_THIS_ONE_THAT_COMPILER_X_DEFINES) soup ;)


Better than silently ignoring an if guard that prevents an overflow, and then overflowing anyway on the addition.


Oh, I see, I wonder if greenhouse_gas is suggesting a feature similar to sizeof() that can be used to portably adapt your program's design to the target's overflow capability.


C language lawyer in training: sizeof is not a function.

When the operand is an expression, the parentheses are just part of that expression; they're only required when the operand is a type name.
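
A quick illustration of the distinction:

    #include <stdio.h>

    int main(void)
    {
        int x = 0;
        printf("%zu\n", sizeof x);      /* expression operand: no parentheses needed */
        printf("%zu\n", sizeof (int));  /* type-name operand: parentheses required */
        return 0;
    }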


How does it feel having your "life's work" [1] become big enough to be noticed, but ignored by the mainstream?

You created the first "better C" out there to get any traction, and it had a perfect name (B -> C -> D), but due to factors that were partially [2] outside your control [3], D's niche got eaten by Go and Rust.

[1]. The D programming language.

[2]. I've seen people blame the D to D 2.0 transition, but D was a niche language before that too. I've also seen people blame D for not having an open source compiler for years, but neither did Java, and it took off anyway.

[3]. Such as Sun's marketing, or the general push toward scripting languages between 1995 and 2010 (such as Perl, Tcl, PHP, Ruby, Python and Node).


When I started, I was well aware of:

"Certainty of death. Small chance of success. What are we waiting for?"

https://www.youtube.com/watch?v=8joT0oFuGoI

For a language without a megacorp pushing it, D is spectacularly successful. Consider:

1. It's developed by volunteers who work on it for the joy of it without remuneration.

2. People who spend the effort to get past the learning curve find it very pleasing and productive.

3. Other major languages have been continuously copying D's features :-)

4. Corporations that have adopted it for mission critical use have told me that D is their "secret weapon" that allows them to out-maneuver and out-innovate their competitors.

It's like the music business - you don't have to be the Beatles to have a very good career.


Walter, I have been using your products way back since before Zortech; at the time it was the only C++ system that I could get to work with the Phar Lap DOS extender to create big binaries.

Thank you for all your hard work. Your products formed the center of my early career, and most of what I know I learned using your tools.

Note: I particularly loved "Zed", the text editor that came with the early toolchain.

Regards Tim Hawkins, a grateful hacker.


Welcs!


> It's like the music business - you don't have to be the Beatles to have a very good career.

I think you're too humble here. The quality of D is no lower than that of competing products backed by huge corporations.

I also vaguely remember using Symantec C++; it was a refreshing alternative to its competitors at the time, and I had a very pleasant experience using it.


> It's like the music business - you don't have to be the Beatles to have a very good career.

That is an inspired quote. I shall steal that and use it whenever anyone argues "why x when y already exists".


>whenever anyone argues "why x when y already exists".

Yes, I've seen many people argue that, including to me, e.g. about some of my blog posts, which show hand-rolled ways of doing things when a library for that exists. I guess such people are often not doing much themselves, and don't get that people can create something in an area where something else already exists for any number of reasons - even just for sheer fun, love of the field, or to teach beginners.


or simply to have control over the code.


Yes, good point. When it is not a blog post but real-life project code, I sometimes do it for that reason you mention.


> "Certainty of death. Small chance of success. What are we waiting for?"

You are an inspiration Walter !!


My favorite line from the movie.


> It's like the music business - you don't have to be the Beatles to have a very good career.

As someone who is in the music business but has also been a D programmer for many years, I've got to say this was the most inspirational quote I have ever read, and it'll be my inspiration in both fields.


> D's niche

That represents a misunderstanding of D. Unlike Go and Rust, which were built for one specific use case (Go for the work done at Google, and Rust for writing a browser), D is an all-around good general purpose programming language. If it did have a niche, it would be as a C replacement, and it is the best available tool for that.

Your comment also implies that D is not used. Download statistics, forum activity, and the existence of users suggest otherwise.


Rust was built to be good for browsers, but also as a general purpose language. We’ve made decisions in the past to not include things the Servo folks wanted, for example.


>Vi is on every unix system

Except Debian's install CD. They only have nano.


Are you sure? I just looked through an old Debian 8 iso that I have and it had vim in it.


>Last year, after a firejail local root exploit got released

Is it worse than running Firefox without firejail?


Well, it depends: does the Linux account running Firefox have the ability to get root (sudo, su)?

If yes, I don't know. Maybe a 'strong' AppArmor/SELinux policy might catch some of the exploits firejail tries to mitigate?

Otherwise yes, clearly: a Firefox exploit would usually not result in root access (unless it's combined with other Linux exploits) - in the case of firejail, it would have resulted in a root exploit.

I'm not saying don't use firejail at any cost. But I am saying that you shouldn't have false confidence in your security just because you are using firejail, because their current practices don't seem ideal for a security product. At the moment, firejail advocates make it sound like firejail is 'a proper security solution for the Linux desktop', but given the circumstances, it's not.

It might be worth checking out the tor-browser(-bundle?) AppArmor profile(s).


> i can with zero effort and 100% confidence run any old Go code i have lying around. that's pretty awesome.

I think you can do that in Rust too (post 1.0). What's more impressive is that canonical Go code from five years ago is still canonical.


whether that's impressive or not is very much a matter of perspective -- it is certainly remarkable! :-)


They can and do. For example, you can get Marshmallow on the Samsung S2, which officially didn't even get Lollipop.


I bet this is only possible because someone got the drivers for display, touch, camera, radios, etc. in binary form from the last OEM image and put them in the new one. Sometimes they will backport the last kernel from Android 5.1 into a 6.0 image so the binary drivers still work.


I had an S2. It died in October 2016 after 5 years of good work. It still has some good points compared to more modern phones, but I wouldn't recommend buying a refurbished one now. I wouldn't trust the hardware to last much longer.


The real problem is that you can't get a phone [1] which can run an upstream kernel. So while there are postmarketOS and Replicant (both trying to port regular Linux (for lack of a better word [2]) to phones), very few phones work properly (and most are fairly old and not sold anymore; IIRC the newest phone Replicant supports is the European S3 - guess how old that thing is).

So now, for each device, you have to port desktop Linux [3] to whatever generation kernel it came with.

Debian, in contrast, can assume that you're using the standard modern kernel.

[1]. I mean that it should work as a phone, not just a small tablet. So if it can't make phone calls, it ain't a phone. All the more so if you want a working GPS and Camera.

[2]. I would have said GNU/Linux, but postmarketOS is based on Alpine, which is built on BusyBox/musl rather than GNU.

[3]. Anything which issues syscalls is fair game, so, for example, if the oldest phone you want to maintain is six years old, then you have to backport every single program (and library!) to work on a random seven-year-old kernel.

Even backporting the Android user space (which is mostly Java and doesn't interact directly with the kernel) is a huge pain (and sometimes it's actually so hard that the maintainers just give up). Backporting Debian?


postmarketOS tries to bring a desktop-style, Alpine-based Linux distro to mobile devices. It has some level of support for relatively recent (~2015) devices, but there's still a lot of work to be done.

Replicant is working on creating a fully free software distribution of Android, rather than trying to provide a desktop-style Linux.

Porting programs isn't generally a huge deal. Most devices on postmarketOS currently use the old downstream kernel, and most things run fine on it.

The main problem is that a lot of peripherals require closed source drivers and firmware. For instance, very few devices have support for hardware-accelerated graphics or wireless connectivity, both of which are necessary on a mobile device.

