How is a csv not the most accurate representation of the data? If you trust the other agent encoded it properly in the db, then sure. Your flippant dismissal was inappropriate in tone and detracted from the rest of your opinion.
Cockiness tells me that you’re insecure about your knowledge, not that you know more than GP.
> How is a csv not the most accurate representation of the data? If you trust the other agent encoded it properly in the db, then sure.
The idea that a CSV would be more likely to be correctly encoded than a DB is hilarious, thanks for the laugh. But that you were confident enough to seriously put it in writing shows how little experience you have with CSV.
Yep, you got me. I'm actually a trapeze artist moonlighting on these forums.
A CSV file represents the exact digits that are to be stored. You have unlimited precision. You could even store irrational numbers, equations, or mix data types in a column. OTOH, you have to make sure the delimiting character is not present in the data - that can be pretty easy if you use one of the ASCII separator characters (e.g. the unit separator, 0x1F), or even just a \t. I've even seen people terminate fields with ^8675309| because they felt confident no data would contain Jenny's number.
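Something like this in Python, as a rough sketch (the unit separator is just one possible delimiter choice, and QUOTE_MINIMAL would quote any field that happened to contain it):

    import csv
    import io

    # Rough sketch: use the ASCII unit separator (0x1F) as the delimiter so
    # commas and tabs inside the data are just data. Every field round-trips
    # as the exact string that was written; nothing is coerced to a number.
    rows = [
        ["id", "value", "note"],
        ["1", "0.10000000000000000000000001", "all the digits, kept as text"],
        ["2", "sqrt(2)", "an expression, not a number"],
        ["3", "hello, world\tfoo", "commas and tabs survive untouched"],
    ]

    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\x1f", quoting=csv.QUOTE_MINIMAL)
    writer.writerows(rows)

    buf.seek(0)
    for row in csv.reader(buf, delimiter="\x1f"):
        print(row)  # each field comes back as the exact string written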
A database, like Excel, likes to conform data. This is usually awesome! But sometimes, it's not.
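For example (SQLite here purely as an illustration; every engine has its own coercion rules, and the table and values are made up):

    import sqlite3

    # A database conforming data: column affinity converts numeric-looking
    # text, so extra digits and leading zeros are silently lost.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (x REAL, code INTEGER)")
    con.execute("INSERT INTO t VALUES ('0.10000000000000000000000001', '00042')")
    print(con.execute("SELECT x, code FROM t").fetchone())  # (0.1, 42)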
"A bird, like homo sapiens, has two feet" doesn't mean that birds are humans or humans are birds either. It means that in this respect, birds and humans are alike. Which is what GP meant: With respect to conforming data, databases behave just like Excel.
Language note: I think it's the fact that there are two commas around the inserted clause [terminology?] ", like Excel," that does it. If there were only one comma, before or after "like Excel", it would read the way you read it. That can be tricky for non-native (and sometimes also native) speakers.