
There is an API available from many mobile operators, called SIM Swap, that can be called to find out if the SIM has been swapped. Often these calls are made by banks to an aggregator (such as Telesign or Twilio) that figures out which mobile operator (e.g. Verizon or AT&T) the customer belongs to and calls the SIM Swap API on that operator. There is currently no universal app for that purpose, but one could be developed. See the CAMARA Project for the latest specifications of this and many other APIs.
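
A rough sketch of what such a call can look like from the bank's side is below. It uses libcurl; the endpoint, token and payload are placeholders modelled on the shape of the CAMARA SIM Swap "check" operation, not any particular aggregator's real API.

    #include <curl/curl.h>
    #include <iostream>
    #include <string>

    // Collect the JSON response body into a std::string.
    static size_t collect(char* data, size_t size, size_t nmemb, void* out) {
        static_cast<std::string*>(out)->append(data, size * nmemb);
        return size * nmemb;
    }

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();
        if (!curl) return 1;

        std::string response;
        // Placeholder URL and token; a real integration would use the
        // aggregator's documented endpoint and an OAuth2 access token.
        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://aggregator.example.com/sim-swap/v0/check");
        curl_slist* headers = nullptr;
        headers = curl_slist_append(headers, "Content-Type: application/json");
        headers = curl_slist_append(headers, "Authorization: Bearer <access-token>");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);

        // Ask: has the SIM behind this number been swapped in the last 72 hours?
        const char* body = R"({"phoneNumber": "+12025550123", "maxAge": 72})";
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

        CURLcode rc = curl_easy_perform(curl);
        if (rc == CURLE_OK)
            std::cout << response << "\n";   // e.g. {"swapped": true}
        else
            std::cerr << curl_easy_strerror(rc) << "\n";

        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }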


As others have mentioned, SIM Swap attacks, where the attacker impersonates the victim and convinces the mobile operator to transfer the victim's phone number (known as the MSISDN in telecom parlance) to the attacker's SIM, are very common. If you Google SIM Swap, you will find many instances of it.

From that moment onwards, all the second-factor SMS OTPs go to the attacker.

There are APIs provided by mobile operators, via aggregators such as Telesign, Prove, Vonage, Twilio, etc., that can be used to check whether a SIM swap has happened recently on a given phone number. That API is used by fintech companies and others, e.g. when they want to decide whether a fund transfer should be allowed or flagged.
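
As a rough illustration of how that check can feed into the decision (the names, the 72-hour window and the policy here are all made up for illustration, not any real bank's rules):

    #include <chrono>

    // What the aggregator's SIM swap check might tell us (illustrative shape).
    struct SimSwapInfo {
        bool swap_reported;                                 // did the operator report a swap at all?
        std::chrono::system_clock::time_point last_swap;    // when the latest swap happened
    };

    enum class TransferDecision { Allow, FlagForReview };

    // Flag the transfer if the SIM changed within the recency window, because
    // any SMS OTP sent since then may be going to an attacker's handset.
    TransferDecision decide(const SimSwapInfo& info,
                            std::chrono::hours window = std::chrono::hours(72)) {
        const auto now = std::chrono::system_clock::now();
        if (info.swap_reported && now - info.last_swap < window)
            return TransferDecision::FlagForReview;
        return TransferDecision::Allow;
    }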


After trying to reach out to several of their key people on LinkedIn, I tried to reach out to them via a form on their site. This was to GIVE THEM SOME BUSINESS. Never heard anything back. This was for a telco-related product that we have, so I'm not sure if it is just the telco people in GCP who are this bad or whether the rot is wider than that.


Exactly the same for me. I watched his interview with Aarthi and Sriram of a16z yesterday, and the more I tried to understand him, the more he seemed like not just a BS peddler but deluded as well. Maybe it is just the two of us?


ccache is used together with distcc at the place I am currently working. I started digging into how these two work, as I thought there was still room for improvement in our build times, which can vary between 10 minutes and 1 hour. It is a huge code base, easily more than a million lines and around 18k files. But I had to stop as there were way too many features to develop and bugs to fix. Also, management does not see that kind of work as useful, so there is no point fighting those battles.


My codebase is significantly larger than yours (mine's a mix of mostly C++ and some C), perhaps 10-12 million lines. Clean builds are ~10m; clean-with-ccache builds are ~2m; incremental builds are milliseconds.

I know this probably won't help with your current project, but you should think of your compiler as an exotic virtual machine: your code is the input program, and the executable is the output. Just like with a "real" CPU, there are ways to write a program that are fast, and ways to write a program that are slow.

To continue the analogy: if you have to sort a list, use `qsort()`, not bubble sort.

So, for C/C++ we can order the "cost" of various language features, from most expensive to least expensive:

    1. Deeply nested header-only (templated/inline) "libraries";
    2. Function overloading (especially with templates);
    3. Classes;
    4. Functions & type definitions; and,
    5. Macros & data.
That means, if you were to look at my code-base, you'd see lots and lots of "table driven" code, where I've encoded huge swathes of business logic as structured arrays of integers, and even more as macros-that-make-such-tables. This code compiles at ~100kloc/s.
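
A minimal sketch of that style, with an invented domain and invented names, just to show the shape: the "logic" is plain data plus a dumb loop, which the compiler chews through very quickly.

    // Business rule: fee for routing an SMS to a given country/network pair.
    struct RateRule {
        int country_code;
        int network_id;
        int fee_tenths;   // fee in tenths of a cent
    };

    // A macro-that-makes-the-table: adding a rule is one line of data, not code.
    #define RATE(cc, net, fee) { (cc), (net), (fee) }

    static const RateRule kRates[] = {
        RATE( 1, 10, 45),
        RATE(44, 20, 60),
        RATE(91, 30, 15),
    };

    int lookup_fee(int cc, int net) {
        for (const RateRule& r : kRates)
            if (r.country_code == cc && r.network_id == net)
                return r.fee_tenths;
        return -1;   // unknown route
    }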

We don't use function-overloading: one place we removed this reduced compile times from 70 hours to 20 seconds. Function-overloading requires the compiler to walk a list of functions, perform ADL, and then decide which is best. Functions that are "just C like" require a hash-lookup. The difference is about a factor of 10000 in speed. You can do "pretend" function-overloading by using a template + a switch statement, and letting template instantiation sort things out for you.
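
One way to read that template-plus-switch trick (this is my guess at the pattern, not the parent's actual code): a single template function selected by a compile-time tag, so the compiler resolves one plain name and the switch on the constant folds to a single branch.

    #include <cstdio>

    enum class Kind { Int, Real, Text };

    // One name, no overload set to walk: the caller picks the behaviour with a
    // compile-time tag, and the switch on the constant K collapses per instantiation.
    template <Kind K>
    void print_value(const void* p) {
        switch (K) {
            case Kind::Int:  std::printf("%d\n", *static_cast<const int*>(p));    break;
            case Kind::Real: std::printf("%f\n", *static_cast<const double*>(p)); break;
            case Kind::Text: std::printf("%s\n", static_cast<const char*>(p));    break;
        }
    }

    int main() {
        int i = 42;
        double d = 3.14;
        print_value<Kind::Int>(&i);    // instead of an overloaded print_value(int)
        print_value<Kind::Real>(&d);   // instead of an overloaded print_value(double)
        return 0;
    }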

The last thing is we pretty much never allow "project" header files to include each other. More importantly, templated types must be instantiated once, in one C++ file, and then `extern`ed. This gives you all the benefit of a template (write once, reuse), with none of the holy-crap-we're-parsing-this-again issues.
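
For anyone who hasn't seen it, a tiny sketch of the instantiate-once-then-extern pattern (file names made up):

    // table.h -- every includer sees the template, but is told NOT to instantiate it.
    #include <vector>

    template <typename T>
    struct Table {
        std::vector<T> rows;
        void add(const T& v) { rows.push_back(v); }
    };

    extern template struct Table<int>;   // suppress implicit instantiation here

    // table.cpp -- the one and only place Table<int> is actually compiled.
    // #include "table.h"
    template struct Table<int>;          // explicit instantiation definition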


I love your comment and it is 100% spot-on. `extern`ing things is the magic sauce for making anything compile fast.

The only downside is that it adds a ton of boilerplate and a lot of maintenance overhead. You need separate compilation units for everything, and then you need a sub-struct to use the pimpl approach. Fast pimpl (in-place new in reserved space in the parent struct itself) gets rid of the heap allocations, but you still have a pointer indirection, and you prevent the compiler from properly stripping out unused code across translation units (that's where LTO comes in these days).
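
For context, a rough sketch of the fast-pimpl shape being described here; the sizes and names are illustrative, and a real version has to keep the reserved storage in sync with sizeof(Impl) and its alignment.

    #include <cstddef>
    #include <new>

    // widget.h -- users of Widget never see Impl, so its headers stay out of theirs.
    class Widget {
    public:
        Widget();
        ~Widget();
        void poke();
    private:
        struct Impl;                                           // defined only in widget.cpp
        alignas(std::max_align_t) unsigned char storage_[64];  // must be >= sizeof(Impl)
        Impl* impl_;                                           // points into storage_
    };

    // widget.cpp
    struct Widget::Impl {
        int counter = 0;
    };

    Widget::Widget() {
        static_assert(sizeof(Impl) <= sizeof(storage_), "grow storage_");
        impl_ = ::new (storage_) Impl();                       // in-place new: no heap allocation
    }

    Widget::~Widget() { impl_->~Impl(); }

    void Widget::poke() { ++impl_->counter; }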

Really, the problem is just that it’s a PITA to write compared to sticking everything in the header file.

(It’s ironic that Rust meets the first two rules by design but is still much slower than C++ to compile, though that does imply what’s already known, namely that there’s a lot of room for improvement.)


You are probably aware, but for others: with ccache this is called "cache sloppiness", which is my favourite term.

You can set this via config; by default ccache is paranoid about being correct, but you can tweak it with things like setting a build-directory home. This is great for me: I'm the only user, but I compile things in, say, `/home/josh/dev/foo` and `/home/josh/dev/bar`, and with my dev directory set as the build-directory home the cache is shared between them. (See https://ccache.dev/manual/latest.html for all the wonderful knobs you can turn and tweak.)
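
For reference, the relevant knobs look roughly like this in ccache.conf; the values here are just examples, so check the manual linked above for the full list of options and their exact meanings.

    # ~/.config/ccache/ccache.conf  (example values)
    # Rewrite absolute paths under this directory so builds in different
    # checkouts can share cache hits.
    base_dir = /home/josh/dev
    # Relax a couple of the paranoid correctness checks.
    sloppiness = time_macros,include_file_mtime
    # Store cache objects compressed (zstd).
    compression = true
    compression_level = 5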

Fantastic tool, and the zstd compression is great as well.

I played with distcc (as I have a "homelab" with a couple of higher-end consumer desktops), but found it not worth my time as compiling locally was faster. I'm sure with much bigger code bases (like yours) this would be great. The reason I used it is that Arch Linux makes it super easy to use with makepkg (their build-tool script that helps to build packages).


The best use of distcc "at home" is when you have one or more "big iron" machines (desktop, server, whatever) and a few tiny machines that work just fine but don't have much processing power.

For example, with some work, you can set up distcc to cross-compile on your massive amd64 box for your Raspberry Pi.


For builds that large, I (personally) start evaluating Bazel. Bazel has distributed-build and shared-cache features built in. But I've always just dug into reducing build times in any large C or C++ code base I've worked on, no matter what management says is important. And the switch to Bazel can be costly (in effort), and it may be difficult to get team buy-in.


Brings back memories. Fresh out of college, I was given the additional job of being the Clearcase and Unix admin for my team. Not that I had any special skills; it's just that others didn't know a few Unix (System V) commands that I did. But Clearcase was such a good product and was used in the telecom companies that I worked for (Motorola, Lucent, etc.). It was owned by Rational at that time and, if memory serves me right, Rational was later acquired by IBM.

To this day, I find Clearcase's way of doing things the better way to do version control. Git, in comparison, feels kind of alien, and I could never really get the same level of comfort with it.


Here are the key differences between 5G and Wifi:

    1. Dedicated vs. shared spectrum. All big countries have shared-spectrum initiatives for 5G too, but it is still not a free-for-all like Wifi, so interference-wise 5G might be better for some use cases. I have heard about that in several shipping ports where private 5G is deployed.
    2. Security, due to the usage of a SIM; though Wifi security is good too.
    3. Range. Though most of 5G is in frequency ranges comparable with Wifi, there is a huge range of powers at which 5G base stations can transmit, so range is possibly larger for 5G.

But it all depends on use-case and there is no clear winner for all situations.



Very interesting indeed. You seem to know about this. Do you have any links where I can find more details (which are not easily Googleable)?


Rich Roll discusses this barrier in his podcast with Andrew Huberman: https://www.youtube.com/watch?v=SwQhKFMxmDY

