Hacker News

It's more or less public knowledge. You can find it yourself by running "strings" on the Twitter app binary. Any attempts on Twitter's part to limit the disclosure of these tokens would almost certainly invoke the Streisand Effect.
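(For the curious: the `strings` technique is trivial to reproduce. It just scans a binary for long runs of printable ASCII. A minimal Python sketch of the same idea; the byte blob and the key-looking string below are made up, not Twitter's actual credentials:

```python
import re

def printable_strings(data: bytes, min_len: int = 8):
    """Yield runs of printable ASCII, like the Unix `strings` tool."""
    for match in re.finditer(rb"[ -~]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

# Hardcoded credentials tend to stand out as long alphanumeric runs:
blob = b"\x00\x01GarbageBytes\xffIQKqmRcVylMTKnpVTkrCfmYHE\x00\x02"
print(list(printable_strings(blob)))
# → ['GarbageBytes', 'IQKqmRcVylMTKnpVTkrCfmYHE']
```

The point is that nothing about a hardcoded key is hidden from anyone with a copy of the binary.)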



Couldn't another app use these tokens and take advantage of lax API limits?


Yes. And that's the point of the disclosure.


Anyone know what the API limits for these keys are? Is Twitter really favoring this key, or is that hypothetical?

Of course, you still have to log in as a user, and Twitter could blacklist accounts that use this key on non-Twitter apps, which are going to have a lot of 'tells' and a specific signature in patterns of how they use the API.

(Twitter could even take advantage of that by hiding a code in a usage pattern, kind of like the POW who blinked in Morse code when he was put on TV)


> Is Twitter really favoring this key, or is that hypothetical?

In at least one way, yes. New third-party Twitter clients are limited to 100k users, but Twitter's official clients are unlimited. If those clients built in a "use your own authentication token" UI, you could put the official client's tokens in and work around that limit.


> Is Twitter really favoring this key, or is that hypothetical?

I don't know about API quotas, but I'm totally sure that they allow more than 100K tokens.


On Android, the FOSS client Twidere lets users change the tokens in its options. https://play.google.com/store/apps/details?id=org.mariotaku....

The Chrome app Hotot does too. https://chrome.google.com/webstore/detail/hotot/cnfkkfleeioo...


NekoTsui supports changing the consumer key/secret. https://itunes.apple.com/app/nekotsui/id476924886


I think Apple will simply not permit applications in the App Store that use these keys but are not official clients. Looks like something that is pretty easy to automate.


You presume that one would use the keys on an iPhone. No reason you couldn't run them on a Linux box in AWS...


I'm not sure why Apple would play police for Twitter, though.


Isn't Twitter integrated into Apple's mobile operating system? Such a tight partnership is plenty of reason for them to play police for Twitter.


Yes and no. Apple wants to protect their Twitter partnership, but... Apple knows that there aren't any effective police in the park next door. So the question is whether Apple values their Twitter relationship enough that they're willing to cede most of the future energy and enthusiasm around third-party Twitter clients to Android.

It's possible, but I don't think it is at all an easy call.


How would Apple know that the app uses these keys? If they run something similar to strings then all you have to do is store the keys in some kind of obfuscated form.


Right, but as soon as the press finds out, and they will, that developer account will be banned. Most devs won't see it as worth the risk.


What responsibility does Apple have to Twitter except the notification center widget?


Twitter did this to themselves. Without the limit, this information is worthless. It would make sense for an app like Tweetro[1] to add a custom token as a feature or easter egg.

1: http://www.theverge.com/2012/11/11/3631108/tweetro-user-toke...


> Without the limit, this information is worthless.

Not true. Say you have a malicious Twitter client app that posts "Lose Weight In 30 days! <link>." Normally, Twitter could shut this offending app down by rejecting their client ID/secret; if they're using the official Twitter creds though, doing so would shut down all official Twitter apps in the process.


They already have spam systems in place to catch repetitive spam tweets and block them.


It would be possible to obfuscate a secret by storing it in several parts and combining them at run time. Still very far from secure, but this would require much more effort to extract the secret from the app.

Anyone: what is best practice here (Android and/or iOS)?

Edit:

Storing application secrets in Android's credential storage [1]. I have no idea how secure this actually is.

Should I obfuscate OAuth consumer secret stored by Android app? [2]

[1] http://nelenkov.blogspot.co.uk/2012/05/storing-application-s...

[2] http://stackoverflow.com/questions/7121966/should-i-obfuscat...


> It would be possible to obfuscate a secret by storing it in several parts and combining them at run time.

Then run `strings` on the virtual memory image of the offending process. Same difference.


Correct me if I'm wrong here but I believe that then all one would need to do is stick an SSL intercepting proxy (such as http://mitmproxy.org/doc/ssl.html) in the middle and get the keys from there.


That depends on how the secret is used by the client to authenticate with the remote service.

If the client just sends the secret as part of an authentication request, then a proxy would reveal it. But if some form of challenge/response [1] process is used, where the value sent is derived from the secret and an unpredictable challenge sent by the remote service, then as far as I know a proxy wouldn't help.

I don't know enough about how the Twitter/Dropbox/etc. APIs work to know if they use challenge-response.

[1] http://en.wikipedia.org/wiki/Challenge%E2%80%93response_auth...
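A minimal sketch of the challenge/response idea above, assuming a shared HMAC secret (the names and the secret here are hypothetical; real OAuth flows differ):

```python
import hashlib
import hmac
import os

SECRET = b"hypothetical-consumer-secret"  # shared, but never sent on the wire

def client_response(challenge: bytes) -> bytes:
    # Only a digest derived from the secret crosses the network.
    return hmac.new(SECRET, challenge, hashlib.sha256).hexdigest().encode()

def server_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SECRET, challenge, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)             # unpredictable server-side nonce
response = client_response(challenge)  # all an intercepting proxy would see
print(server_verify(challenge, response))  # → True
```

A proxy that records `challenge` and `response` never learns `SECRET`; at best it could try to replay that one exchange, which a fresh nonce per request defeats.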


It's at least possible for the client to accept only the keys from your real servers (i.e., pinning), which would stop this attack.

For what happens in the real world, see Georgiev et al.'s "The most dangerous code in the world" at https://crypto.stanford.edu/~dabo/pubs/abstracts/ssl-client-... (spoiler: I described this paper in our internal knowledgebase as "very readable. Promises lots of facepalming and delivers in spades.")


Incorrect. The consumer secret and access token secret are not transmitted from the client to the server during OAuth. They are only used to sign requests.
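For reference, OAuth 1.0a signing works roughly like this: the consumer secret and token secret form an HMAC-SHA1 key over a "signature base string" built from the request, and only the resulting signature is transmitted. A simplified sketch (real requests also include oauth_nonce, oauth_timestamp, etc.):

```python
import base64
import hashlib
import hmac
import urllib.parse

def oauth1_signature(method, url, params, consumer_secret, token_secret):
    """HMAC-SHA1 over an OAuth 1.0a-style signature base string.
    The secrets form the signing key and never appear in the request."""
    q = lambda s: urllib.parse.quote(s, safe="")
    base_items = "&".join(f"{q(k)}={q(v)}" for k, v in sorted(params.items()))
    base = "&".join([method.upper(), q(url), q(base_items)])
    key = f"{q(consumer_secret)}&{q(token_secret)}".encode()
    digest = hmac.new(key, base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

This is why an SSL-intercepting proxy sees only per-request signatures, not the secrets themselves.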


Clarification: that's for OAuth v1. In v2 the secrets just go over SSL.


Only if the app uses the phone's certificate store, as opposed to a hard-coded one.


For Android, I suppose you could just run a Java bytecode obfuscator before converting the bytecode to Dalvik. There doesn't seem to be anything comparable for iOS.

One simple solution is to set N-1 arrays to random data (hardcoded or generated at compile time) and set the last array to the real secret XOR random array #1 XOR random array #2 XOR ... XOR random array #N-1; this doesn't exactly stop a determined attacker, but it does stop "strings".
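A sketch of that XOR-splitting scheme (in a real app the random arrays would be generated at compile time and hardcoded; here they're generated at run time for brevity):

```python
import os
from functools import reduce

def split_secret(secret: bytes, n: int = 4):
    """Store n arrays; the last is the secret XORed with all the random ones,
    so no single array (and no `strings` pass) reveals the secret."""
    parts = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(secret, *parts))
    return parts + [last]

def join_secret(parts):
    # XOR everything back together at run time to recover the secret.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*parts))

secret = b"hypothetical-consumer-secret"
assert join_secret(split_secret(secret)) == secret
```

Since XOR is associative and every random array cancels itself out, the final XOR yields the original secret; but anyone who can dump process memory after `join_secret` runs still gets it in the clear.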


There's very little point obfuscating strings in iOS - since you can attach a debugger to the binary itself on jailbroken handsets (or using cycript) you can step through to the method(s) that use the secret keys and pull them out from there.


Unless you know the device isn't rooted this doesn't really achieve very much. On a rooted device an "attacker" could have replaced the credential storage with something that will conveniently store the data unprotected.

It is helpful as a way of ensuring random applications don't get hold of the data, but not for keeping the data from a determined user.



