I took linear algebra the same semester as computer graphics. The first half of CG was all about 2D stuff: Bresenham's line-drawing algorithm, things like that. The first half of LA was pretty much everything I needed to know to understand the matrix operations in OpenGL, which was the second half of CG. It worked great! I still remember most of the relevant stuff 20 years later.
The second half of linear algebra was a bunch of stuff about eigenvectors and eigenvalues. We were never given an explanation of why you'd care about them, and I still have no idea why you would.
Eigenvectors and eigenvalues are (among other things) connected to something called diagonalization. In math, this is incredibly useful because it lets you describe a matrix as being put together from simpler, easy-to-understand matrices.
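To make that concrete, here's a quick numpy sketch (the matrix is just made up for illustration) of what diagonalization buys you:

    # Diagonalize a 2x2 matrix as A = P D P^-1: D holds the eigenvalues,
    # and the columns of P are the eigenvectors. (Matrix is made up.)
    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    eigenvalues, P = np.linalg.eig(A)   # columns of P are the eigenvectors
    D = np.diag(eigenvalues)

    # A is "put together" from the simpler pieces:
    print(np.allclose(A, P @ D @ np.linalg.inv(P)))          # True

    # And hard things get easy: A^10 is just P D^10 P^-1, where D^10 means
    # raising each eigenvalue to the 10th power.
    print(np.allclose(np.linalg.matrix_power(A, 10),
                      P @ np.diag(eigenvalues ** 10) @ np.linalg.inv(P)))  # True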
Eigenvectors and eigenvalues are also a major focus of quantum mechanics. For example, when you measure the energy level of an electron in a quantum mechanical system, the measurement corresponds to a linear operator on the underlying wavefunction (linear operator = think "like a matrix"). The eigenvalues are the different energy levels the electron can have, and the eigenvectors are the wavefunctions, i.e. the states of the electron with the corresponding energy level. You can experimentally verify that the eigenvalues correspond to spectral lines (the rainbow you see when you look at the substance through a diffraction grating).
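Purely as a toy illustration (a made-up two-level system, nothing physical about the numbers), the same machinery in numpy looks like this:

    # Toy two-level system with made-up numbers. The Hamiltonian is a
    # Hermitian matrix; its eigenvalues are the allowed energies and its
    # eigenvectors are the corresponding stationary states.
    import numpy as np

    H = np.array([[1.0, 0.3],
                  [0.3, 2.0]])

    energies, states = np.linalg.eigh(H)   # eigh handles Hermitian/symmetric matrices
    print(energies)                        # the two allowed energy levels
    print(energies[1] - energies[0])       # the gap between levels, which is what a spectral line reflects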
There are a ton of other things going on with eigenvalues and eigenvectors; they're used all over the place. If you want to understand a Markov process, for example, it can be described in terms of linear equations, and if it has a steady state, that steady state is an eigenvector with eigenvalue 1.
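Here's a small made-up example in numpy: the steady state is the eigenvector with eigenvalue 1, and it matches what you get by just running the chain for a long time.

    # Tiny made-up Markov chain. Columns of T sum to 1: entry [i, j] is
    # P(next state = i | current state = j).
    import numpy as np

    T = np.array([[0.9, 0.5],
                  [0.1, 0.5]])

    eigenvalues, eigenvectors = np.linalg.eig(T)
    k = np.argmin(np.abs(eigenvalues - 1.0))    # pick out the eigenvalue-1 eigenvector
    steady = np.real(eigenvectors[:, k])
    steady /= steady.sum()                      # normalize into a probability distribution
    print(steady)                               # ~[0.833, 0.167]

    # Cross-check: running the chain for a long time lands on the same distribution.
    p = np.array([1.0, 0.0])
    for _ in range(200):
        p = T @ p
    print(p)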
Decomposing things into eigenvectors turns what used to be a complicated problem of coupled variables (the matrix) into a list of simple single-variable problems. It turns a matrix equation into a list of scalar equations. I hope this helps :)
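A little numpy sketch of that decoupling (made-up matrix): the coupled update x -> Ax becomes two independent scalar updates in the eigenbasis.

    # The coupled update x_new = A @ x decouples in the eigenbasis: each
    # coordinate just gets multiplied by its own eigenvalue.
    import numpy as np

    A = np.array([[0.8, 0.3],
                  [0.2, 0.7]])
    x0 = np.array([1.0, 2.0])

    eigenvalues, P = np.linalg.eig(A)
    y0 = np.linalg.solve(P, x0)        # x0 expressed in the eigenbasis

    n = 25
    x = x0.copy()
    for _ in range(n):                 # the "coupled" version: apply A over and over
        x = A @ x

    y = (eigenvalues ** n) * y0        # the "decoupled" version: one scalar equation per coordinate
    print(np.allclose(x, P @ y))       # True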
Eigenvectors tell you which vectors are taken to a scalar multiple of themselves by a linear map. Eigenvalues tell you what those scalars are. That means some subspace is mapped into itself by your linear map.
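You can sanity-check that definition numerically (made-up matrix):

    # Check the definition directly: A @ v equals lambda * v for each eigenpair.
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    eigenvalues, eigenvectors = np.linalg.eig(A)
    for lam, v in zip(eigenvalues, eigenvectors.T):
        print(np.allclose(A @ v, lam * v))   # True for each pair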
I'll pile on another common application of eigenvectors and values that hasn't been mentioned: principal component analysis in statistics/machine learning.
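Roughly, the principal components are the eigenvectors of the data's covariance matrix, and the eigenvalues tell you how much variance each one explains. A small sketch with made-up data:

    # PCA on made-up 2D data: principal components are the eigenvectors of the
    # covariance matrix, eigenvalues are the variance explained along each one.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    data = np.column_stack([x, 0.5 * x + 0.1 * rng.normal(size=500)])  # correlated 2D cloud

    cov = np.cov(data, rowvar=False)              # 2x2 covariance matrix
    eigenvalues, eigenvectors = np.linalg.eigh(cov)

    order = np.argsort(eigenvalues)[::-1]         # sort from most to least variance
    eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

    print(eigenvectors[:, 0])                     # first principal component (up to sign), roughly [0.89, 0.45]
    print(eigenvalues / eigenvalues.sum())        # fraction of variance explained by each component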