Let S be an arbitrary solution to our problem, and let Sopt be the solution produced by our algorithm.
1) In the exchange argument, you construct a new solution S1 that is no worse than S, then a solution S2 that is no worse than S1, and so on. Continuing this process, you construct a sequence of solutions S,S1,S2,…,Sn−1,Sn,Sopt in which each solution is no worse than the previous one. In words, you transform an arbitrary solution S into Sopt in finitely many steps, with no step hurting the quality of the solution. In particular, taking S to be an optimal solution shows that Sopt is optimal as well.
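To make this concrete, here is a small sketch (the problem instance, jobs, and helper names are my own illustration, not part of the original argument) using the classic problem of minimizing maximum lateness, where sorting jobs by deadline is optimal. Starting from an arbitrary order, each swap of an adjacent out-of-order pair is one "exchange" step that never increases the maximum lateness, and the sequence of schedules ends at the sorted (greedy) one:

```python
# Sketch of the exchange argument on the minimize-maximum-lateness problem.
# Each job is (processing_time, deadline); jobs run back to back from t = 0.
# Greedy answer: sort by deadline (Earliest Deadline First). All names and
# data below are hypothetical, chosen just to illustrate the proof idea.

def max_lateness(schedule):
    """Maximum lateness over all jobs in the given order."""
    t, worst = 0, float("-inf")
    for proc, deadline in schedule:
        t += proc
        worst = max(worst, t - deadline)
    return worst

def exchange_to_optimal(schedule):
    """Turn an arbitrary schedule S into the greedy (EDD) schedule by
    swapping adjacent out-of-order pairs; record quality after each swap."""
    s = list(schedule)
    trace = [max_lateness(s)]          # quality of S, S1, S2, ...
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(s) - 1):
            if s[i][1] > s[i + 1][1]:  # adjacent inversion w.r.t. deadlines
                s[i], s[i + 1] = s[i + 1], s[i]
                trace.append(max_lateness(s))
                swapped = True
    return s, trace

jobs = [(3, 9), (2, 4), (1, 3), (4, 12)]        # an arbitrary solution S
final, lateness_trace = exchange_to_optimal(jobs)

# Each solution in the sequence is no worse than the previous one ...
assert all(b <= a for a, b in zip(lateness_trace, lateness_trace[1:]))
# ... and the sequence ends at the greedy schedule Sopt.
assert final == sorted(jobs, key=lambda j: j[1])
```

Each entry of `lateness_trace` is the quality of one solution in the sequence S, S1, S2, …, Sopt.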
2) In the staying ahead argument, you take any solution S and show that at every step of the algorithm, the partial solution built so far is doing at least as well as ("stays ahead of") the corresponding part of S. Since this holds in particular when S is an optimal solution, the final solution Sopt is no worse than an optimal solution, and hence is optimal itself.
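As a concrete illustration (again with a made-up instance; the problem choice and function names are my own), here is staying ahead for interval scheduling, where the greedy rule "repeatedly pick the compatible interval with the earliest finish time" is optimal. For every feasible solution S, the greedy's r-th chosen interval finishes no later than S's r-th interval, which in turn forces the greedy solution to be at least as large:

```python
# Sketch of the staying-ahead argument on interval scheduling.
# An interval is (start, finish); a solution is a set of pairwise
# non-overlapping intervals. Instance below is hypothetical.
from itertools import combinations

def compatible(sol):
    """True if the intervals in sol are pairwise non-overlapping."""
    sol = sorted(sol)
    return all(a[1] <= b[0] for a, b in zip(sol, sol[1:]))

def greedy(intervals):
    """Repeatedly pick the compatible interval with the earliest finish."""
    chosen, t = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= t:
            chosen.append((start, finish))
            t = finish
    return chosen

intervals = [(0, 3), (1, 4), (3, 5), (4, 7), (5, 9), (8, 10)]
g = greedy(intervals)

# Check "staying ahead" against every feasible solution S of this instance:
for k in range(1, len(intervals) + 1):
    for S in combinations(intervals, k):
        if compatible(S):
            S = sorted(S)
            # greedy's r-th interval finishes no later than S's r-th ...
            for r in range(min(len(g), len(S))):
                assert g[r][1] <= S[r][1]
            # ... so the greedy solution is at least as large as S.
            assert len(g) >= len(S)
```

Exhaustively checking all feasible solutions is of course only feasible on a toy instance; the proof itself establishes the same inequalities by induction on r.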
That being said, I wouldn't get too hung up on the difference between these two types of proof outlines. I didn't even know before TAing this term that there was actually a definition for these types of arguments. What I often do when proving that an algorithm outputs an optimal solution is take any solution S and show that, in some way, the solution produced by the algorithm is no worse than S.