A data structure deals with the way you store data. A singly linked list, for example, is a data structure. All you need to understand about it is that every node except the terminal one points to one other node. The traversal of this structure can be called an algorithm, albeit a simple one.
Manipulating a linked list often requires a traversal, and inserting data between nodes requires a little juggling of the node pointers. These are all what you could call 'algorithms'. Data structures can exist and be learnt somewhat independently of their underlying algorithms, but they are rarely useful that way.
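As a rough sketch of what I mean (in Python, with made-up names), here is a singly linked list together with the two little 'algorithms' above, traversal and insertion:

```python
class Node:
    """A singly linked list node: a value plus a pointer to one other node."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def traverse(head):
    """Walk the list from the head, yielding each value -- the simplest algorithm."""
    node = head
    while node is not None:
        yield node.value
        node = node.next

def insert_after(node, value):
    """Insert a new node after the given one by juggling the next pointers."""
    node.next = Node(value, node.next)

head = Node(1, Node(3))
insert_after(head, 2)              # list is now 1 -> 2 -> 3
print(list(traverse(head)))        # -> [1, 2, 3]
```

Notice that the class definition alone (the data structure) tells you almost nothing; the traversal and insertion functions are where the usefulness lives.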
There are, however, pure algorithms that do not care much about the underlying data structure; insertion sort is an example.
Most data structures, though, aren't very useful when separated from their fundamental algorithms.
Thanks. My confusion stemmed from not actually taking Data Structures yet, along with the fact that some colleges call their second course 'Algorithms', while including a junior-senior year 'Algorithms' course as well.
Thank you. I've seen that book actually, as Professor Sedgewick has a Coursera Algorithms course. Would you say it's appropriate for a second course (following Intro to CS)?
I believe I once had a short book by CMU's Daniel Sleator and a co-author where they explained some examples showing how different data structures could change the complexity/run-time of algorithms.
Or am I incorrect in thinking of the two as discrete concepts that can't really be studied separately?