Vector Formalism in Introductory Physics II: Six Coordinate-Free Derivations of the BAC-CAB Identity

TL;DR: The BAC-CAB vector identity is probably the most important vector identity, and it has useful applications in introductory physics. I present six coordinate-free derivations of this identity. By “coordinate-free” I mean a derivation that doesn’t rely on any particular coordinate system, relying instead on the inherent geometric relationships among the vectors involved.

I have been on a quest for a coordinate-free derivation of the ubiquitous BAC-CAB vector identity

\mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right)=\left(\mathbf{A}\bullet\mathbf{C} \right)\mathbf{B} -\left(\mathbf{A}\bullet\mathbf{B}\right)\mathbf{C}

for a long time. (Incidentally, we tell students to remember it as “BAC-CAB” but we rarely, if ever, write it that way.) The usual derivations either expand both sides in Cartesian components and show they are equal, or use the Levi-Civita symbol and index notation. The former is tedious; the latter is elegant but lacks geometry. In my extensive web searches I stumbled onto a beautiful coordinate-free derivation cast in the language of differential forms. I understand parts of it, and I want to eventually understand it completely, because it’s analytical, with less emphasis on geometry, and thus closer to what I originally thought I was looking for. But then I changed my mind and decided I wanted something based as much as possible on geometry after all, and I finally found several such derivations instead of just one.
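Before diving into the derivations, the identity itself is easy to spot-check numerically. Here is a minimal sketch in plain Python (no external libraries; the sample vectors are arbitrary):

```python
# Spot-check the BAC-CAB identity A x (B x C) = (A.C)B - (A.B)C
# on arbitrary sample vectors, using plain tuples as 3D vectors.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

A, B, C = (1, 2, 3), (4, 5, 6), (7, 8, 10)

lhs = cross(A, cross(B, C))
rhs = tuple(dot(A, C) * b - dot(A, B) * c for b, c in zip(B, C))
print(lhs == rhs)  # True; integer arithmetic, so the comparison is exact
```

Of course, one numerical check proves nothing; that is what the derivations below are for.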

Remember that I’m writing this as I would explain it to introductory physics students, so I will try to emphasize fine points that might otherwise go unnoticed. I assume the reader has previously been introduced to dot products and cross products; I will address how to introduce those two concepts in future posts rather than here.

Derivation I

Derivation II

Derivation III

Derivation IV

Derivation V

Derivation VI

Derivation I: A Derivation Based on Index Notation

I didn’t invent this derivation. It mixes tensor index notation with traditional symbolic notation, uses the Levi-Civita symbol, and explicitly includes basis vectors, which are usually left out of such derivations; I include them here in anticipation of future posts. In my opinion, this derivation is best described in these notes by Ben-Yaacov and Roig. It is efficient, but it is not geometric in nature. In fact, I think it inherently hides the underlying geometry, but I also think it’s a valuable derivation to know.

Recall that in index notation, vectors are represented as components (coefficients) multiplying basis vectors, summed over all such pairs. \mathbf{A}=A_i\mathbf{\widehat{e}}_i and \mathbf{B}=B_j\mathbf{\widehat{e}}_j. The dot product of these two vectors would be notated as \mathbf{A}\bullet\mathbf{B}=A_i B_i. The cross product of these two vectors would be notated as \mathbf{A}\times\mathbf{B}=\epsilon_{ijk}A_iB_j\mathbf{\widehat{e}}_k. The dot product of two orthonormal basis vectors would be notated as \mathbf{\widehat{e}}_i\bullet\mathbf{\widehat{e}}_j=\delta_{ij}. Finally, the cross product of two orthonormal basis vectors would be notated as \mathbf{\widehat{e}}_i\times\mathbf{\widehat{e}}_j=\epsilon_{ijk}\mathbf{\widehat{e}}_k. For the purposes of this post, I will assume the reader is already familiar with the Levi-Civita symbol and its properties, the Einstein summation convention, and other aspects of index notation.
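To make the machinery concrete, here is a small sketch (plain Python, 0-based indices) of the Levi-Civita symbol and of the cross product computed directly from \epsilon_{ijk}A_iB_j\mathbf{\widehat{e}}_k:

```python
# Levi-Civita symbol for indices in {0, 1, 2}:
# +1 for even permutations of (0, 1, 2), -1 for odd, 0 if any index repeats.
def eps(i, j, k):
    return (j - i) * (k - i) * (k - j) // 2

def cross_index_notation(A, B):
    # k-th component: sum over i, j of eps(i, j, k) * A_i * B_j
    return tuple(sum(eps(i, j, k) * A[i] * B[j]
                     for i in range(3) for j in range(3))
                 for k in range(3))

A, B = (1, 2, 3), (4, 5, 6)
print(cross_index_notation(A, B))  # (-3, 6, -3), matching the usual formula
```

The closed-form `eps` expression is just a compact way of encoding the permutation signs; a table or explicit permutation test would work equally well.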

Here is the derivation.

\begin{aligned} \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right) &= \left(A_i\mathbf{\widehat{e}}_i\right) \times \left(\epsilon_{jkl}B_{j}C_{k}\mathbf{\widehat{e}}_{l}\right) && \text{(1)} \\ &=\epsilon_{jkl}A_{i}B_{j}C_{k}\left(\mathbf{\widehat{e}}_{i}\times\mathbf{\widehat{e}}_{l}\right) && \text{(2)} \\ &=\epsilon_{jkl}A_{i}B_{j}C_{k} \epsilon_{ilh}\mathbf{\widehat{e}}_{h} && \text{(3)} \\ &= \epsilon_{jkl}\epsilon_{hil}A_{i}B_{j}C_{k}\mathbf{\widehat{e}}_{h} && \text{(4)} \\ &= \left(\delta_{jh}\delta_{ki}-\delta_{ji}\delta_{kh}\right)A_{i}B_{j}C_{k}\mathbf{\widehat{e}}_{h} && \text{(5)} \\ &= \delta_{jh}\delta_{ki}A_{i}B_{j}C_{k}\mathbf{\widehat{e}}_{h}-\delta_{ji}\delta_{kh}A_{i}B_{j}C_{k}\mathbf{\widehat{e}}_{h} && \text{(6)} \\ &= \left(A_{i}C_{k}\delta_{ki}\right)\left(B_{j}\mathbf{\widehat{e}}_{h}\delta_{jh}\right)-\left(A_{i}B_{j}\delta_{ji}\right)\left(C_{k}\mathbf{\widehat{e}}_{h}\delta_{kh}\right) && \text{(7)} \\ &= \left(A_{i}C_{i}\right)\left(B_{j}\mathbf{\widehat{e}}_{j}\right)-\left(A_{i}B_{i}\right)\left(C_{k}\mathbf{\widehat{e}}_{k}\right) && \text{(8)} \\ &= \left(\mathbf{A}\bullet\mathbf{C}\right)\mathbf{B}-\left(\mathbf{A}\bullet\mathbf{B}\right)\mathbf{C} && \text{(9)} \\ \therefore \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right) &= \left(\mathbf{A}\bullet\mathbf{C}\right)\mathbf{B}-\left(\mathbf{A}\bullet\mathbf{B}\right)\mathbf{C} && \text{(10)} \end{aligned}

Here is a description for each step.

  1. Write the righthand side in index notation, showing only the innermost cross product written with the Levi-Civita symbol. Note that there is no free index in this expression. All the indices are dummy indices.
  2. Rearrange the righthand side to bring the Levi-Civita symbol to the leftmost position and the cross product of two basis vectors to the rightmost position. This is legal because each factor is simply a scalar, a real number, and thus the entire righthand side, excluding the remaining cross product, is commutative. Note that every index is repeated, and therefore is a dummy index.
  3. Rewrite the basis vector cross product in terms of a second Levi-Civita symbol. Pay particular attention to the names of the indices.
  4. Rearrange the righthand side to bring the second Levi-Civita symbol to the right of the first one. Keep the remaining basis vector at the end of the line.
  5. Rewrite the product of the two Levi-Civita symbols as the difference of two products of Kronecker deltas. The formal logic behind this step is very confusing, and a couple of years ago I invented a way to do it quickly, a way that I have never seen in the literature. It is inspired by a strategy shown to algebra students for multiplying two binomials, the FOIL method. FOIL is an acronym for First, Outer, Inner, Last, which indicates the combinations of terms to be multiplied and in what order. I understand that many mathematics instructors frown on the use of the FOIL method because it’s a shortcut that removes the underlying reasoning. Still, I will show my quick way and let the reader decide on its appropriateness. I call this the FLOI method, a name I remember by thinking of “Floyd the barber” from my all-time favorite TV program, The Andy Griffith Show. The FLOI method calls for finding the dummy index in the Levi-Civita symbol product. Relative to that index, identify the First, Last, Outer, and Inner indices as in multiplying binomials in algebra. Now, for each of First, Last, Outer, and Inner, write a Kronecker delta with the corresponding indices, remembering to subtract the second two from the first two. That is where the minus sign first appears.
  6. Use the distributive property to expand the righthand side.
  7. The next step is to reorder the factors in each term so that the Kronecker deltas can do their job, which is to pick out an index that survives the underlying summation. However, there is a problem. How do we know which factor to associate with each Kronecker delta? Look carefully at the indices. To associate a component with a Kronecker delta, the component must share an index with that delta; otherwise, the delta can’t do its job. Thus, the first Kronecker delta can be associated with B_{j} or \mathbf{\widehat{e}}_h; we choose the latter. The second Kronecker delta can then be associated with either of the remaining components, because the delta’s action will automatically turn one component’s index into the other component’s index; we choose C_{k}. This is very cool! Either way, you’ll end up with a dummy index that will go away when we write the final result in vector notation. The third Kronecker delta must be associated with A_{i} or B_{j}; we choose the latter. Finally, the fourth can be associated with C_{k} or \mathbf{\widehat{e}}_h; we again choose the latter. I arbitrarily write each Kronecker delta to the immediate right of the factor on which it operates. This is merely a convention, one I have not seen addressed in the literature; feel free to ignore it. As I was writing this, I realized another way to think about this step. The combination B_{j}\mathbf{\widehat{e}}_h\delta_{jh} is just the vector B_{j}\mathbf{\widehat{e}}_{j} or B_{h}\mathbf{\widehat{e}}_{h}; it doesn’t matter which index you use because it’s a dummy index. Similarly, the combination A_{i}C_{k}\delta_{ki} is the dot product A_{i}C_{i} or A_{k}C_{k}. I added parentheses for clarity in associating factors.
  8. Let the Kronecker deltas do their job on the indices of the components immediately to the left of each delta. You end up with two dummy indices on each side of the subtraction sign. It just works. At first, there appears to be an error because you end up with one dummy index used twice on the righthand side, but there is no error because each use is restricted to one term. I added parentheses for clarity in associating factors.
  9. Rewrite the righthand side in full vector notation. Remember that two adjacent components with the same index constitute a dot product, and a component adjacent to a basis vector with the same index constitutes a vector. Also, in LaTeX I recommend using \bullet to indicate dot products rather than \cdot because the latter also indicates scalar multiplication of real numbers or variables that represent them algebraically. The parentheses aren’t required because the dot product is unambiguous, but they’re traditionally included.
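The epsilon-delta contraction used in step 5 can also be verified exhaustively by brute force, since each free index takes only three values. A quick sketch (plain Python, 0-based indices):

```python
# Verify eps_{jkl} eps_{hil} (summed over l) = d_{jh} d_{ki} - d_{ji} d_{kh}
# for all 3^4 = 81 combinations of the free indices j, k, h, i.

def eps(i, j, k):
    # Levi-Civita symbol on {0, 1, 2}
    return (j - i) * (k - i) * (k - j) // 2

def delta(i, j):
    # Kronecker delta
    return 1 if i == j else 0

R = range(3)
ok = all(
    sum(eps(j, k, l) * eps(h, i, l) for l in R)
    == delta(j, h) * delta(k, i) - delta(j, i) * delta(k, h)
    for j in R for k in R for h in R for i in R
)
print(ok)  # True
```

An exhaustive check like this is a legitimate proof of the contraction identity, since there are only finitely many index combinations.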

In preparation for more formal work in which dummy indices must occur in upper-lower pairs, one could rewrite this derivation to make the vector components have upper (contravariant) indices and the basis vectors have lower (covariant) indices, with appropriate adjustments to the Levi-Civita symbols’ indices. I may modify this post to reflect that sometime in the future.

Derivation II: A Derivation Based on Geometry

I did not invent this derivation either. Indeed, I found it here, which is part of a much larger online text. It is sufficiently clever that I think it should be more widely seen, and I used to think it would be appropriate for an introductory physics course (provided vectors are introduced more carefully than usual, with an extreme emphasis on coordinate-free geometry), but now I’m not so sure because I’ve run into a problem with it. Nevertheless, several aspects of the derivation pique my interest.

  • It relies on the fact that any vector can be resolved into components parallel to, and perpendicular to, another vector.
  • Given the parallel and perpendicular components relative to another vector, dot products and cross products can be thought of in a slightly different way that I’d never realized. \mathbf{B}\bullet\mathbf{A} can be written as \mathbf{B}\bullet\left(\mathbf{A}_\parallel + \mathbf{A}_\perp\right) and finally as \mathbf{B}\bullet\mathbf{A}_\parallel because the perpendicular component doesn’t survive the dot product. Similarly, \mathbf{B}\times\mathbf{A} can be written as \mathbf{B}\times\left(\mathbf{A}_\parallel + \mathbf{A}_\perp\right) and finally as \mathbf{B}\times\mathbf{A}_\perp because the parallel component doesn’t survive the cross product. These truths are so obvious that I don’t recall noticing them before now, and that bothers me.
  • I exploit the fact that a vector can be “factored” into a magnitude and direction: \mathbf{A} = \left\lVert\mathbf{A}\right\rVert\widehat{\mathbf{A}}. This isn’t really as amazing as it may seem, because it’s (almost) the same thing as saying that the vector can be expressed as the sum of products of corresponding components and basis vectors. Nevertheless, I emphasize this property here because it prevents the misunderstanding that kept me from being able to reproduce this proof as I describe below.
  • Almost all of the derivation takes place in a plane and is easy to visualize.
  • Note the author refers to \mathbf{A}\times\left( \mathbf{B}\times\mathbf{C} \right) as a double cross product rather than the usual triple cross product. This is an entirely intuitive name because there are two cross products involved, not three. There are indeed three operands, and I get why that is used to name the quantity. Still, I prefer the new term even though I’ve been admonished before that I’m not allowed to invent new and better names without “the community’s permission.” Well, if I appeal to the elitism from which my “warning” came, then I in turn appeal to it again in adopting the same terminology as that used at one of the most elite physics schools on the planet. *Charles Emerson Winchester smirk*
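The first two bullets above are easy to check numerically. A minimal sketch (plain Python; the sample vectors are chosen so the projection works out in integers):

```python
# Check that B.A = B.A_par and B x A = B x A_perp, where A_par and A_perp
# are the components of A parallel and perpendicular to B.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

B = (1, 2, 2)   # B.B = 9
A = (9, 0, 0)   # chosen so that A.B / B.B = 1 exactly
s = dot(A, B) // dot(B, B)
A_par = tuple(s * b for b in B)                   # (1, 2, 2)
A_perp = tuple(a - p for a, p in zip(A, A_par))   # (8, -2, -2)

print(dot(B, A) == dot(B, A_par))        # True: A_perp dies in the dot product
print(cross(B, A) == cross(B, A_perp))   # True: A_par dies in the cross product
```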

Because this derivation is inherently geometric, I think it is best presented operationally, as a sequence of steps that can be carried out either on paper or better yet in VPython or GlowScript. I will come back and add links to either a GlowScript or Trinket app that illustrates the derivation.

Here is the derivation, which assumes no two vectors are collinear. As in the previous derivation, I will show the mathematical steps first and then the corresponding description. For clarity, I show many more intermediate steps than the original source shows.

\begin{aligned} \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right) &= \mathbf{A^\prime}\times\left(\mathbf{B}\times\left(\mathbf{C}_{\parallel\mathbf{B}}+\mathbf{C}_{\perp\mathbf{B}}\right)\right) && \text{(1a)} \\ &= \mathbf{A^\prime}\times\left(\mathbf{B}\times\mathbf{C}_{\parallel\mathbf{B}}+\mathbf{B}\times\mathbf{C}_{\perp\mathbf{B}}\right) && \text{(1b)} \\ &= \left\lVert\mathbf{A^\prime}\right\rVert\widehat{\mathbf{A}}^\prime\times\left(\left\lVert\mathbf{B}\right\rVert\widehat{\mathbf{B}}\times\left\lVert\mathbf{C}_{\parallel\mathbf{B}}\right\rVert\widehat{\mathbf{C}}_{\parallel\mathbf{B}}+\left\lVert\mathbf{B}\right\rVert\widehat{\mathbf{B}}\times\left\lVert\mathbf{C}_{\perp\mathbf{B}}\right\rVert\widehat{\mathbf{C}}_{\perp\mathbf{B}}\right) && \text{(1c)} \\ &= \left\lVert\mathbf{A^\prime}\right\rVert\widehat{\mathbf{A}}^\prime\times\left(\left\lVert\mathbf{B}\right\rVert\widehat{\mathbf{B}}\times\left\lVert\mathbf{C}_{\parallel\mathbf{B}}\right\rVert\widehat{\mathbf{B}}+\left\lVert\mathbf{B}\right\rVert\widehat{\mathbf{B}}\times\left\lVert\mathbf{C}_{\perp\mathbf{B}}\right\rVert\widehat{\mathbf{C}}_{\perp\mathbf{B}}\right) && \text{(1d)} \\ &= \left\lVert\mathbf{A^\prime}\right\rVert\widehat{\mathbf{A}}^\prime\times\left(\left\lVert\mathbf{B}\right\rVert\widehat{\mathbf{B}}\times\left\lVert\mathbf{C}_{\perp\mathbf{B}}\right\rVert\widehat{\mathbf{C}}_{\perp\mathbf{B}}\right) && \text{(1e)} \\ &= \underbrace{\left\lVert\mathbf{A^\prime}\right\rVert\left\lVert\mathbf{B}\right\rVert\left\lVert\mathbf{C}_{\perp\mathbf{B}}\right\rVert}_{\text{magnitude}}\underbrace{\widehat{\mathbf{A}}^\prime\times\left(\widehat{\mathbf{B}}\times\widehat{\mathbf{C}}_{\perp\mathbf{B}}\right)}_{\text{direction}} && \text{(1f)} \end{aligned}

\begin{aligned} \mathbf{A^\prime} &= \left\lVert\mathbf{A^\prime}\right\rVert\widehat{\mathbf{A}}^\prime && \text{(2a)} \\ &= \left\lVert\mathbf{A^\prime}\right\rVert\left(\left(\widehat{\mathbf{A}}^\prime\bullet\widehat{\mathbf{B}}\right)\widehat{\mathbf{B}}+\left(\widehat{\mathbf{A}}^\prime\bullet\widehat{\mathbf{C}}_{\perp\mathbf{B}}\right)\widehat{\mathbf{C}}_{\perp\mathbf{B}}\right) && \text{(2b)} \end{aligned}

\begin{aligned} \mathbf{A}_{\perp\mathbf{A^\prime}} &= \left\lVert\mathbf{A^\prime}\right\rVert\widehat{\mathbf{A}}_{\perp\mathbf{A^\prime}} && \text{(3a)} \\ &= \left\lVert\mathbf{A^\prime}\right\rVert\left(\left(\widehat{\mathbf{A}}^\prime\bullet\widehat{\mathbf{C}}_{\perp\mathbf{B}}\right)\widehat{\mathbf{B}}-\left(\widehat{\mathbf{A}}^\prime\bullet\widehat{\mathbf{B}}\right)\widehat{\mathbf{C}}_{\perp\mathbf{B}}\right) && \text{(3b)} \end{aligned}

\begin{aligned} \left\lVert\mathbf{A^\prime}\right\rVert\left\lVert\mathbf{B}\right\rVert\left\lVert\mathbf{C}_{\perp\mathbf{B}}\right\rVert\widehat{\mathbf{A}}^\prime\times\left(\widehat{\mathbf{B}}\times\widehat{\mathbf{C}}_{\perp\mathbf{B}}\right) &= \left\lVert\mathbf{A}_{\perp\mathbf{A^\prime}}\right\rVert\left\lVert\mathbf{B}\right\rVert\left\lVert\mathbf{C}_{\perp\mathbf{B}}\right\rVert\widehat{\mathbf{A}}_{\perp\mathbf{A^\prime}} && \text{(4a)} \\ &= \left\lVert\mathbf{A^\prime}\right\rVert\left\lVert\mathbf{B}\right\rVert\left\lVert\mathbf{C}_{\perp\mathbf{B}}\right\rVert\widehat{\mathbf{A}}_{\perp\mathbf{A^\prime}} && \text{(4b)} \\ &= \left\lVert\mathbf{A^\prime}\right\rVert\left\lVert\mathbf{B}\right\rVert\left\lVert \mathbf{C}_{\perp\mathbf{B}}\right\rVert  \\ &\quad \cdot \left(\left(\widehat{\mathbf{A}}^\prime\bullet \widehat{\mathbf{C}}_{\perp\mathbf{B}}\right)\widehat{\mathbf{B}} - \left(\widehat{\mathbf{A}}^\prime\bullet\widehat{\mathbf{B}}\right)\widehat{\mathbf{C}}_{\perp\mathbf{B}}\right) && \text{(4c)} \\ &= \left(\mathbf{A^\prime}\bullet\mathbf{C}_{\perp\mathbf{B}}\right)\mathbf{B} - \left(\mathbf{A^\prime}\bullet\mathbf{B}\right)\mathbf{C}_{\perp\mathbf{B}} && \text{(4d)} \end{aligned}

\begin{aligned} \left(\widehat{\mathbf{A}}^\prime\bullet\widehat{\mathbf{B}}\right)\widehat{\mathbf{C}}_{\parallel\mathbf{B}} &= \left(\widehat{\mathbf{A}}^\prime\bullet\widehat{\mathbf{C}}_{\parallel\mathbf{B}}\right)\widehat{\mathbf{B}} && \text{(5a)} \\ \left\lVert\mathbf{A^\prime}\right\rVert\left\lVert\mathbf{B}\right\rVert\left\lVert\mathbf{C}_{\parallel\mathbf{B}}\right\rVert\left(\widehat{\mathbf{A}}^\prime\bullet\widehat{\mathbf{B}}\right)\widehat{\mathbf{C}}_{\parallel\mathbf{B}} &= \left\lVert\mathbf{A^\prime}\right\rVert\left\lVert\mathbf{B}\right\rVert\left\lVert\mathbf{C}_{\parallel\mathbf{B}}\right\rVert\left(\widehat{\mathbf{A}}^\prime\bullet\widehat{\mathbf{C}}_{\parallel\mathbf{B}}\right)\widehat{\mathbf{B}} && \text{(5b)} \\ \left(\mathbf{A^\prime}\bullet\mathbf{B}\right)\mathbf{C}_{\parallel\mathbf{B}} &= \left(\mathbf{A^\prime}\bullet\mathbf{C}_{\parallel\mathbf{B}}\right)\mathbf{B} && \text{(5c)} \\ 0 &= \left(\mathbf{A^\prime}\bullet\mathbf{C}_{\parallel\mathbf{B}}\right)\mathbf{B} - \left(\mathbf{A^\prime}\bullet\mathbf{B}\right)\mathbf{C}_{\parallel\mathbf{B}} && \text{(5d)} \end{aligned}

\begin{aligned} \left\lVert\mathbf{A^\prime}\right\rVert\left\lVert\mathbf{B}\right\rVert\left\lVert\mathbf{C}_{\perp\mathbf{B}}\right\rVert\widehat{\mathbf{A}}^\prime\times\left(\widehat{\mathbf{B}}\times\widehat{\mathbf{C}}_{\perp\mathbf{B}}\right) &= \left(\mathbf{A^\prime}\bullet\mathbf{C}_{\perp\mathbf{B}}\right)\mathbf{B} - \left(\mathbf{A^\prime}\bullet\mathbf{B}\right)\mathbf{C}_{\perp\mathbf{B}} \\ &\quad + \left(\mathbf{A^\prime}\bullet\mathbf{C}_{\parallel\mathbf{B}}\right)\mathbf{B} - \left(\mathbf{A^\prime}\bullet\mathbf{B}\right)\mathbf{C}_{\parallel\mathbf{B}} && \text{(6a)} \\ &= \left(\mathbf{A^\prime}\bullet\left(\mathbf{C}_{\perp\mathbf{B}}+\mathbf{C}_{\parallel\mathbf{B}}\right)\right)\mathbf{B} \\ &\quad -\left(\mathbf{A^\prime}\bullet\mathbf{B}\right)\left(\mathbf{C}_{\perp\mathbf{B}}+\mathbf{C}_{\parallel\mathbf{B}}\right) && \text{(6b)} \\ &= \left(\mathbf{A^\prime}\bullet\mathbf{C}\right)\mathbf{B} - \left(\mathbf{A^\prime}\bullet\mathbf{B}\right)\mathbf{C} && \text{(6c)} \\ \end{aligned}

\begin{aligned} \left\lVert\mathbf{A^\prime}\right\rVert\left\lVert\mathbf{B}\right\rVert\left\lVert\mathbf{C}_{\perp\mathbf{B}}\right\rVert\widehat{\mathbf{A}}^\prime\times\left(\widehat{\mathbf{B}}\times\widehat{\mathbf{C}}_{\perp\mathbf{B}}\right) &= \left(\mathbf{A^\prime}\bullet\mathbf{C}\right)\mathbf{B} - \left(\mathbf{A^\prime}\bullet\mathbf{B}\right)\mathbf{C} && \text{(7a)} \\ \mathbf{A^\prime}\times\left(\mathbf{B}\times\mathbf{C}_{\perp\mathbf{B}}\right) &= \left(\mathbf{A^\prime}\bullet\mathbf{C}\right)\mathbf{B} - \left(\mathbf{A^\prime}\bullet\mathbf{B}\right)\mathbf{C} && \text{(7b)} \\ \therefore \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right) &= \left(\mathbf{A}\bullet\mathbf{C}\right)\mathbf{B} - \left(\mathbf{A}\bullet\mathbf{B}\right)\mathbf{C} && \text{(7c)} \end{aligned}

  1. Let \mathbf{A^\prime} be the projection of \mathbf{A} onto the plane containing \mathbf{B} and \mathbf{C}. Let \mathbf{C}_{\parallel\mathbf{B}} and \mathbf{C}_{\perp\mathbf{B}} be the vector components of \mathbf{C} relative to \mathbf{B}. Substitute \mathbf{A^\prime} for \mathbf{A} and \mathbf{C}_{\parallel\mathbf{B}}+\mathbf{C}_{\perp\mathbf{B}} for \mathbf{C} in the original expression. Then “factor” each vector into a magnitude and direction. Simplify the resulting expression, noting that it is “naturally” factored into a magnitude (a product of three magnitudes, actually) and a direction (a double cross product of three directions, actually).
  2. Resolve the direction of \mathbf{A^\prime} into vector components along \widehat{\mathbf{B}} and \widehat{\mathbf{C}}_{\perp\mathbf{B}}.
  3. Construct a vector orthogonal to \mathbf{A^\prime} using the same components from the previous step by interchanging them and negating one of them. Geometry dictates which one to negate. Draw a diagram! This is where the negative is first introduced. Note that the magnitude of this newly constructed vector is the same as that of \mathbf{A^\prime}. Geometrically, we’re merely rotating \mathbf{A^\prime} by ninety degrees in the appropriate direction.
  4. In the final expression from step (1), replace the direction of \mathbf{A^\prime} with that of \mathbf{A}_{\perp\mathbf{A^\prime}}. You should now be able to distribute the magnitudes on the righthand side to get an expression in terms of vectors rather than magnitudes and directions.
  5. Exploit the fact that \mathbf{B} and \mathbf{C}_{\parallel\mathbf{B}} have the same direction, so the projection of \mathbf{A^\prime} onto either is the same, and therefore their difference is zero. Note the order of the resulting subtraction, foreshadowing the final result.
  6. Add the final results of steps (4) and (5), which is nothing more than adding zero to both sides. Adding zero changes nothing algebraically, but in this case it allows for some algebra to take place.
  7. Distribute the magnitudes on the lefthand side. Replace every occurrence of \mathbf{A^\prime} with \mathbf{A} on both sides and \mathbf{C}_{\perp\mathbf{B}} with \mathbf{C} on the lefthand side, remembering that this doesn’t change the outcome as proven in step (1). The identity is proven.
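Step 1’s replacement of \mathbf{A} by its projection \mathbf{A^\prime} deserves a numerical sanity check: the component of \mathbf{A} along \mathbf{B}\times\mathbf{C} contributes nothing to \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right). A sketch using exact rational arithmetic from the Python standard library:

```python
# Verify that A x (B x C) is unchanged when A is replaced by its projection
# A' onto the plane containing B and C. Fractions keep the arithmetic exact.
from fractions import Fraction

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

A, B, C = (1, 2, 3), (4, 5, 6), (7, 8, 10)
n = cross(B, C)  # normal to the plane of B and C

# A' = A minus its component along the plane normal
coeff = Fraction(dot(A, n), dot(n, n))
A_proj = tuple(Fraction(a) - coeff * nc for a, nc in zip(A, n))

print(cross(A, n) == cross(A_proj, n))  # True
```

The check works because \mathbf{A}-\mathbf{A^\prime} is parallel to \mathbf{B}\times\mathbf{C}, and a vector crossed with a parallel vector vanishes.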

This derivation gave me a lot of difficulty initially. For some reason, I got it into my head that \mathbf{A}_{\perp\mathbf{A^\prime}} had both the same magnitude and direction as \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right) when actually the two only have the same direction. I don’t know how many hours I wasted trying to reconcile the resulting discrepancies before I saw the error in my reasoning.

Derivation III: Another Derivation Based on Geometry

This derivation is also geometric in nature, though conceptually simpler and slightly more algebraic than the previous one. I found it on pages 19 and 20 of this excellent textbook. In my printing, there is a typo in equation (8): \alpha should be \lambda. This derivation rotates one of the vectors to exploit some geometry, and it also exploits the properties of the mixed product (aka triple scalar product, an illogical name if ever there were one). This derivation could be modified to project \mathbf{A} onto the plane containing the other two vectors, as in the previous derivation, but the authors chose not to do so. It is again assumed that no two vectors are collinear.

\begin{aligned} \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right) &= \lambda\mathbf{B}+\mu\mathbf{C} && \text{(1)} \end{aligned}

\begin{aligned} \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right)\bullet\mathbf{C^*} &= \lambda\mathbf{B}\bullet\mathbf{C^*} + \mu\mathbf{C}\bullet\mathbf{C^*} && \text{(2a)} \\ &= \lambda\mathbf{B}\bullet\mathbf{C^*} && \text{(2b)} \end{aligned}

\begin{aligned} \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right)\bullet\mathbf{C^*} &= \left(\mathbf{B}\times\mathbf{C}\right)\times\mathbf{C^*}\bullet\mathbf{A} && \text{(3a)} \\ \left\lVert\left(\mathbf{B}\times\mathbf{C}\right)\times\mathbf{C^*}\right\rVert &= \left\lVert\mathbf{B}\times\mathbf{C}\right\rVert\left\lVert\mathbf{C^*}\right\rVert\sin\frac{\pi}{2} && \text{(3b)} \\ &= \left\lVert\mathbf{B}\right\rVert\left\lVert\mathbf{C}\right\rVert\sin\theta_{\mathbf{B},\mathbf{C}}\left\lVert\mathbf{C^*}\right\rVert && \text{(3c)} \\ &= \left\lVert\mathbf{B}\right\rVert\left\lVert\mathbf{C^*}\right\rVert\cos\theta_{\mathbf{B},\mathbf{C^*}}\left\lVert\mathbf{C}\right\rVert && \text{(3d)} \\ &= \left(\mathbf{B}\bullet\mathbf{C^*}\right)\left\lVert\mathbf{C}\right\rVert && \text{(3e)} \\ \therefore \left(\mathbf{B}\times\mathbf{C}\right)\times\mathbf{C^*} &= \left(\mathbf{B}\bullet\mathbf{C^*}\right)\mathbf{C} && \text{(3f)} \end{aligned}

\begin{aligned} \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right)\bullet\mathbf{C^*} &= \left(\mathbf{B}\bullet\mathbf{C^*}\right)\left(\mathbf{C}\bullet\mathbf{A}\right) && \text{(4)} \end{aligned}

\begin{aligned} \lambda\mathbf{B}\bullet\mathbf{C^*} &= \left(\mathbf{B}\bullet\mathbf{C^*}\right)\left(\mathbf{C}\bullet\mathbf{A}\right) && \text{(5a)} \\ \therefore \lambda &= \mathbf{A}\bullet\mathbf{C} && \text{(5b)} \end{aligned}

\begin{aligned} \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right)\bullet\mathbf{A} &= \lambda\mathbf{B}\bullet\mathbf{A}+\mu\mathbf{C}\bullet\mathbf{A} && \text{(6a)} \\ 0 &=\lambda\mathbf{B}\bullet\mathbf{A}+\mu\mathbf{C}\bullet\mathbf{A} && \text{(6b)} \\ \mu &= -\lambda\frac{\mathbf{B}\bullet\mathbf{A}}{\mathbf{C}\bullet\mathbf{A}} && \text{(6c)} \\ &= -\mathbf{A}\bullet\mathbf{C}\frac{\mathbf{B}\bullet\mathbf{A}}{\mathbf{C}\bullet\mathbf{A}} && \text{(6d)} \\ &= -\mathbf{B}\bullet\mathbf{A} && \text{(6e)} \\ \end{aligned}

\begin{aligned} \therefore \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right) &= \left(\mathbf{A}\bullet\mathbf{C}\right)\mathbf{B}-\left(\mathbf{A}\bullet\mathbf{B}\right)\mathbf{C} && \text{(7)} \end{aligned}

  1. Reason that the final result must be a linear combination of \mathbf{B} and \mathbf{C}.
  2. Rotate \mathbf{C} by \pi/2 clockwise in the plane containing \mathbf{B} and \mathbf{C} and call the resulting vector \mathbf{C^*}. Dot each side with \mathbf{C^*}.
  3. Exploit the cyclic property of the mixed product, and show that the intermediate quantity (also incidentally a double cross product) involving \mathbf{C^*} can be written as a multiple of \mathbf{C}. Notice the geometry exploited in steps (3c) and (3d). That’s the utility of using \mathbf{C^*}.
  4. Dot both sides of (3f) with \mathbf{A}.
  5. Compare (2b) and (4) to solve for \lambda.
  6. Dot both sides of (1) with \mathbf{A}, exploit the properties of the mixed product, substitute for \lambda, and solve for \mu.
  7. Substitute into (1) and the identity is proven.
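The auxiliary vector \mathbf{C^*} and the key intermediate result (3f) can be checked numerically. In this sketch I construct \mathbf{C^*} by crossing the unit normal of the \mathbf{B}-\mathbf{C} plane with \mathbf{C}, which rotates \mathbf{C} by \pi/2 in that plane while preserving its magnitude (the opposite rotation sense merely flips signs consistently on both sides):

```python
# Check (B x C) x C* = (B . C*) C, where C* is C rotated by pi/2
# in the plane containing B and C.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

B, C = (4.0, 5.0, 6.0), (7.0, 8.0, 10.0)
D = cross(B, C)
n_hat = tuple(d / math.sqrt(dot(D, D)) for d in D)  # unit normal to the plane
C_star = cross(n_hat, C)  # C rotated 90 degrees in the plane, same magnitude

lhs = cross(D, C_star)
rhs = tuple(dot(B, C_star) * c for c in C)
print(all(math.isclose(l, r) for l, r in zip(lhs, rhs)))  # True
```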

Derivation IV: A Straightforward Algebraic Derivation

This derivation is important because it is found in the definitive vector analysis textbook, that of Wilson, which is based on Gibbs’ notes. It consists of two parts: the first establishes an identity used in the second, and the second calculates the result as a linear combination of \mathbf{B}, \mathbf{C}, and \mathbf{B}\times\mathbf{C}.

To begin the first part, resolve \mathbf{B} into a component \mathbf{B^\prime} parallel to \mathbf{A} and a component \mathbf{B^{\prime\prime}} perpendicular to \mathbf{A}. Let \theta be the angle between \mathbf{A} (or \mathbf{B^\prime}) and \mathbf{B}.

\begin{aligned} \mathbf{A}\times\left(\mathbf{A}\times\mathbf{B}\right) &= -c\mathbf{B^{\prime\prime}} && \text{(1a)} \\ &= -\left\lVert\mathbf{A}\right\rVert \left\lVert\mathbf{A}\times\mathbf{B}\right\rVert\sin{\frac{\pi}{2}}\mathbf{\widehat{B}}^{\prime\prime} && \text{(1b)} \\ &= -\left\lVert\mathbf{A}\right\rVert \left\lVert\mathbf{A}\right\rVert \left\lVert\mathbf{B}\right\rVert \sin{\frac{\pi}{2}} \sin\theta \mathbf{\widehat{B}}^{\prime\prime} && \text{(1c)} \\ &= -\left\lVert\mathbf{A}\right\rVert^2 \left\lVert\mathbf{B}\right\rVert \sin\theta \mathbf{\widehat{B}}^{\prime\prime} && \text{(1d)} \\ &= -\left\lVert\mathbf{A}\right\rVert^2 \left\lVert\mathbf{B^{\prime\prime}}\right\rVert \mathbf{\widehat{B}}^{\prime\prime} && \text{(1e)} \\ &= -\left\lVert\mathbf{A}\right\rVert^2 \mathbf{B^{\prime\prime}} && \text{(1f)} \\ \therefore c &= \left\lVert\mathbf{A}\right\rVert^2 && \text{(1g)} \\ &= \mathbf{A}\bullet\mathbf{A} && \text{(1h)} \end{aligned}

\begin{aligned} \therefore \mathbf{B^{\prime\prime}} &= -\dfrac{\mathbf{A}\times\left(\mathbf{A}\times\mathbf{B}\right)}{\mathbf{A}\bullet\mathbf{A}} && \text{(2)} \end{aligned}

\begin{aligned} \mathbf{B} &= \mathbf{B^\prime} + \mathbf{B^{\prime\prime}} && \text{(3a)} \\ &= \dfrac{\mathbf{A}\bullet\mathbf{B}}{\mathbf{A}\bullet\mathbf{A}}\mathbf{A}-\dfrac{\mathbf{A}\times\left(\mathbf{A}\times\mathbf{B}\right)}{\mathbf{A}\bullet\mathbf{A}} && \text{(3b)} \\ \left(\mathbf{A}\bullet\mathbf{A}\right)\mathbf{B} &= \left(\mathbf{A}\bullet\mathbf{B}\right)\mathbf{A} - \mathbf{A}\times\left(\mathbf{A}\times\mathbf{B}\right) && \text{(3c)} \\ \therefore \mathbf{A}\times\left(\mathbf{A}\times\mathbf{B}\right) &= \left(\mathbf{A}\bullet\mathbf{B}\right)\mathbf{A}-\left(\mathbf{A}\bullet\mathbf{A}\right)\mathbf{B} && \text{(3d)} \end{aligned}
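The first part’s result (3d) can be spot-checked with integer vectors; a minimal sketch:

```python
# Spot-check A x (A x B) = (A.B)A - (A.A)B on sample integer vectors.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

A, B = (1, 2, 3), (4, 5, 6)
lhs = cross(A, cross(A, B))
rhs = tuple(dot(A, B) * a - dot(A, A) * b for a, b in zip(A, B))
print(lhs == rhs)  # True (exact, integer arithmetic)
```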

Now for the second part.

\begin{aligned} \mathbf{A} &= b\mathbf{B} + c\mathbf{C} + a\left(\mathbf{B}\times\mathbf{C}\right) && \text{(4a)} \\ \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right) &= b\mathbf{B}\times\left(\mathbf{B}\times\mathbf{C}\right) + c\mathbf{C}\times\left(\mathbf{B}\times\mathbf{C}\right) + a\left(\mathbf{B}\times\mathbf{C}\right)\times\left(\mathbf{B}\times\mathbf{C}\right) && \text{(4b)} \\ &= b\mathbf{B}\times\left(\mathbf{B}\times\mathbf{C}\right) + c\mathbf{C}\times\left(\mathbf{B}\times\mathbf{C}\right) && \text{(4c)} \\ &= b\left(\left(\mathbf{B}\bullet\mathbf{C}\right)\mathbf{B}-\left(\mathbf{B}\bullet\mathbf{B}\right)\mathbf{C} \right)-c\left(\left(\mathbf{C}\bullet\mathbf{B}\right)\mathbf{C}-\left(\mathbf{C}\bullet\mathbf{C}\right)\mathbf{B} \right) && \text{(4d)} \\ &= \left(b\mathbf{B}\bullet\mathbf{C}+c\mathbf{C}\bullet\mathbf{C} \right)\mathbf{B} - \left(b\mathbf{B}\bullet\mathbf{B}+c\mathbf{C}\bullet\mathbf{B} \right)\mathbf{C} && \text{(4e)} \end{aligned}

\begin{aligned} \mathbf{A}\bullet\mathbf{B} &= b\mathbf{B}\bullet\mathbf{B} + c\mathbf{C}\bullet\mathbf{B} + \underbrace{a\mathbf{B}\times\mathbf{C}\bullet\mathbf{B}}_{0} && \text{(5a)} \\ \mathbf{A}\bullet\mathbf{B} &= b\mathbf{B}\bullet\mathbf{B} + c\mathbf{C}\bullet\mathbf{B} && \text{(5b)} \\ \mathbf{A}\bullet\mathbf{C} &= b\mathbf{B}\bullet\mathbf{C} + c\mathbf{C}\bullet\mathbf{C} + \underbrace{a\mathbf{B}\times\mathbf{C}\bullet\mathbf{C}}_{0} && \text{(5c)} \\ \mathbf{A}\bullet\mathbf{C} &= b\mathbf{B}\bullet\mathbf{C} + c\mathbf{C}\bullet\mathbf{C} && \text{(5d)} \end{aligned}

\begin{aligned} \therefore \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right) &= \left(\mathbf{A}\bullet\mathbf{C}\right)\mathbf{B}-\left(\mathbf{A}\bullet\mathbf{B}\right)\mathbf{C} && \text{(6)} \end{aligned}
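The decomposition in (4a), on which the second part rests, can be carried out explicitly: dotting with \mathbf{B} and \mathbf{C} gives the 2×2 linear system (5b) and (5d), and the \mathbf{B}\times\mathbf{C} coefficient comes from dotting with \mathbf{B}\times\mathbf{C}. A sketch using exact rational arithmetic:

```python
# Decompose A = b*B + c*C + a*(B x C) by solving the 2x2 system obtained
# by dotting with B and C (Cramer's rule), then reconstruct A exactly.
from fractions import Fraction

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

A, B, C = (1, 2, 3), (4, 5, 6), (7, 8, 10)
D = cross(B, C)

# System: A.B = b B.B + c C.B ;  A.C = b B.C + c C.C
det = dot(B, B) * dot(C, C) - dot(B, C) * dot(C, B)
b = Fraction(dot(A, B) * dot(C, C) - dot(A, C) * dot(C, B), det)
c = Fraction(dot(B, B) * dot(A, C) - dot(B, C) * dot(A, B), det)
a = Fraction(dot(A, D), dot(D, D))

recon = tuple(b * bb + c * cc + a * dd for bb, cc, dd in zip(B, C, D))
print(recon == A)  # True: B, C, and B x C span the space
```

The determinant is nonzero precisely because \mathbf{B} and \mathbf{C} are not collinear, the same assumption the derivation makes.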

Derivation V: Gibbs’ Other Derivation

This derivation, also found in Wilson, is also due to Gibbs. It is very similar to the next derivation, but I include it here for completeness. An essentially identical version is found on pages 7 and 8 of Tai’s excellent book.

\begin{aligned} \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right) &= x\mathbf{B} + y\mathbf{C} && \text{(1a)} \\ \mathbf{A}\bullet\left(\mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right)\right) &= x\mathbf{A}\bullet\mathbf{B} + y\mathbf{A}\bullet\mathbf{C} && \text{(1b)} \\ &= 0 && \text{(1c)} \\ x\mathbf{A}\bullet\mathbf{B} &= -y\mathbf{A}\bullet\mathbf{C} && \text{(1d)} \\ x &= -y\dfrac{\mathbf{A}\bullet\mathbf{C}}{\mathbf{A}\bullet\mathbf{B}} && \text{(1e)} \end{aligned}

\begin{aligned} \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right) &= -y\dfrac{\mathbf{A}\bullet\mathbf{C}}{\mathbf{A}\bullet\mathbf{B}}\mathbf{B} + y\mathbf{C} && \text{(2a)} \\ &= -y\dfrac{1}{\mathbf{A}\bullet\mathbf{B}}\left(\left(\mathbf{A}\bullet\mathbf{C}\right) \mathbf{B}-\left(\mathbf{A}\bullet\mathbf{B}\right)\mathbf{C} \right) && \text{(2b)} \\ &= \hphantom{-} n \left(\left(\mathbf{A}\bullet\mathbf{C}\right) \mathbf{B}-\left(\mathbf{A}\bullet\mathbf{B}\right)\mathbf{C} \right) && \text{(2c)} \end{aligned}

\begin{aligned} \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right)\bullet\mathbf{B} &= \hphantom{-}\mathbf{A}\bullet\left(\mathbf{B}\times\mathbf{C}\right)\times\mathbf{B} && \text{(3a)} \\ &= -\mathbf{A}\bullet\left(\mathbf{B}\times\left(\mathbf{B}\times\mathbf{C}\right)\right) && \text{(3b)} \\ &= -\mathbf{A}\bullet\left(\left(\mathbf{B}\bullet\mathbf{C}\right)\mathbf{B}-\left(\mathbf{B}\bullet\mathbf{B}\right)\mathbf{C}\right) && \text{(3c)} \\ &= -\left(\mathbf{B}\bullet\mathbf{C}\right) \left(\mathbf{A}\bullet\mathbf{B}\right) + \left(\mathbf{B}\bullet\mathbf{B}\right) \left(\mathbf{A}\bullet\mathbf{C}\right) && \text{(3d)} \\ &= \hphantom{-}\left(\mathbf{A}\bullet\mathbf{C}\right) \left(\mathbf{B}\bullet\mathbf{B}\right) - \left(\mathbf{A}\bullet\mathbf{B}\right) \left(\mathbf{C}\bullet\mathbf{B}\right) && \text{(3e)} \end{aligned}
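To connect this with (2c), dot the righthand side of (2c) with \mathbf{B} as well:

\begin{aligned} n \left(\left(\mathbf{A}\bullet\mathbf{C}\right) \mathbf{B}-\left(\mathbf{A}\bullet\mathbf{B}\right)\mathbf{C} \right)\bullet\mathbf{B} &= n\left(\left(\mathbf{A}\bullet\mathbf{C}\right) \left(\mathbf{B}\bullet\mathbf{B}\right) - \left(\mathbf{A}\bullet\mathbf{B}\right) \left(\mathbf{C}\bullet\mathbf{B}\right)\right) \end{aligned}

Comparing this with (3e) gives the value of n.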

\begin{aligned} \therefore n &= 1 && \text{(4)} \end{aligned}

\begin{aligned} \therefore \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right) &= \left(\mathbf{A}\bullet\mathbf{C}\right)\mathbf{B}-\left(\mathbf{A}\bullet\mathbf{B}\right)\mathbf{C} && \text{(5)} \end{aligned}

Derivation VI: A Derivation Based on Linearity

This is the slickest derivation, but certainly not the least geometric. It is coordinate-free and framed in the spirit of MTW’s approach, and for that reason I think this should be the first derivation students see. It requires a slightly different approach to vectors, as you will see. Although it closely resembles the previous derivation, I will repeat those steps here using Tai’s notation.

\begin{aligned} \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right) &= \alpha\mathbf{B} + \beta\mathbf{C} && \text{(1a)} \\ \mathbf{A}\bullet\left(\mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right)\right) &= \alpha\mathbf{A}\bullet\mathbf{B} + \beta\mathbf{A}\bullet\mathbf{C} && \text{(1b)} \\ &= 0 && \text{(1c)} \\ \alpha\mathbf{A}\bullet\mathbf{B} &= -\beta\mathbf{A}\bullet\mathbf{C} && \text{(1d)} \\ \beta &= -\alpha\dfrac{\mathbf{A}\bullet\mathbf{B}}{\mathbf{A}\bullet\mathbf{C}} && \text{(1e)} \end{aligned}

\begin{aligned} \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right) &= \alpha\mathbf{B} - \alpha\dfrac{\mathbf{A}\bullet\mathbf{B}}{\mathbf{A}\bullet\mathbf{C}}\mathbf{C} && \text{(2a)} \\ &= \alpha\left[ \mathbf{B} - \dfrac{\mathbf{A}\bullet\mathbf{B}}{\mathbf{A}\bullet\mathbf{C}}\mathbf{C}\right] && \text{(2b)} \\ &= \dfrac{\alpha}{\mathbf{A}\bullet\mathbf{C}}\left[ \left(\mathbf{A}\bullet\mathbf{C}\right)\mathbf{B} - \left(\mathbf{A}\bullet\mathbf{B}\right)\mathbf{C}\right] && \text{(2c)} \\ &= \lambda\left[ \left(\mathbf{A}\bullet\mathbf{C}\right)\mathbf{B} - \left(\mathbf{A}\bullet\mathbf{B}\right)\mathbf{C}\right] && \text{(2d)} \end{aligned}

\begin{aligned} \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right)&= \lambda\left[ \left(\mathbf{A}\bullet\mathbf{C}\right)\mathbf{B} - \left(\mathbf{A}\bullet\mathbf{B}\right)\mathbf{C}\right] && \text{(3a)} \\ \mathbf{B}\times\left(\mathbf{B}\times\mathbf{C}\right)&= \lambda\left[ \left(\mathbf{B}\bullet\mathbf{C}\right)\mathbf{B} - \left(\mathbf{B}\bullet\mathbf{B}\right)\mathbf{C}\right] && \text{(3b)} \\ -\left\lVert\mathbf{B}\right\rVert^2 \left\lVert\mathbf{C}\right\rVert\widehat{\mathbf{C}} &= \lambda\left[-\left\lVert\mathbf{B}\right\rVert^2 \left\lVert\mathbf{C}\right\rVert\widehat{\mathbf{C}}\right] && \text{(3c)} \\ \end{aligned}

\begin{aligned} \therefore \lambda &= 1 && \text{(4)} \end{aligned}

\begin{aligned} \therefore \mathbf{A}\times\left(\mathbf{B}\times\mathbf{C}\right) &= \left(\mathbf{A}\bullet\mathbf{C}\right)\mathbf{B}-\left(\mathbf{A}\bullet\mathbf{B}\right)\mathbf{C} && \text{(5)} \end{aligned}

  1. Begin by assuming the result is a linear combination of \mathbf{B} and \mathbf{C}. Dot each side with \mathbf{A}, and treat the lefthand side as a triple scalar product. It must be zero because it contains one vector twice. Therefore, the righthand side must also be zero, and this relates the two coefficients. I arbitrarily chose to solve for \beta; you could just as well solve for \alpha.
  2. Substitute the expression for \beta into the original equation. Factor out \alpha, then factor out \dfrac{1}{\mathbf{A}\bullet\mathbf{C}} (this assumes \mathbf{A}\bullet\mathbf{C}\neq 0; the excluded case follows by continuity), and combine the two into one constant, \lambda.
  3. Here is where the most interesting part of the derivation happens. The double cross product on the lefthand side is linear in each vector argument. The righthand side, dot products multiplying vectors, is also linear in each vector argument. This is nothing more than a formal way of saying that if we replace any one of \mathbf{A}, \mathbf{B}, or \mathbf{C} by s\mathbf{A}, s\mathbf{B}, or s\mathbf{C}, where s is a scalar (a real number for our purposes), each side is scaled by that same s. Because both sides scale identically, their ratio \lambda cannot depend on our choices for \mathbf{A}, \mathbf{B}, and \mathbf{C}. Thus, to evaluate \lambda we can use any vectors we want on the lefthand and righthand sides. Let’s make things simple for us, but let’s also not choose a coordinate system. Let’s temporarily assume that \mathbf{B} and \mathbf{C} are orthogonal, and let’s also let \mathbf{A}=\mathbf{B}. Now we can evaluate both the lefthand and righthand sides for these arbitrarily, but also strategically, chosen input vectors.
  4. The result from the previous step allows us to deduce that \lambda = 1.
  5. The identity is proven.
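None of the following appears in the original post, but both the full identity and the \lambda = 1 evaluation in step 3 can be sanity-checked numerically with a short stand-alone Python snippet (the dot and cross helpers are ad hoc, not from any library):

```python
# Ad hoc 3-vector helpers (tuples of floats); not from any library.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def bac_cab_rhs(A, B, C):
    # (A . C) B - (A . B) C
    return tuple(dot(A, C) * bi - dot(A, B) * ci for bi, ci in zip(B, C))

# Arbitrary (non-orthogonal) vectors: the identity should hold exactly.
A = (1.0, 2.0, 3.0)
B = (4.0, 0.5, -1.0)
C = (-2.0, 1.5, 2.0)
lhs = cross(A, cross(B, C))
rhs = bac_cab_rhs(A, B, C)
assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))

# Step 3's strategic choice: B and C orthogonal, A = B.
B2 = (2.0, 0.0, 0.0)
C2 = (0.0, 3.0, 0.0)
lhs2 = cross(B2, cross(B2, C2))  # equals -|B|^2 |C| C-hat = (0, -12, 0)
rhs2 = bac_cab_rhs(B2, B2, C2)
lam = lhs2[1] / rhs2[1]          # ratio along C-hat gives lambda
assert abs(lam - 1.0) < 1e-12
```

The second half mirrors step 3 exactly: with \mathbf{A}=\mathbf{B} and \mathbf{B}\perp\mathbf{C}, both sides collapse to a multiple of \widehat{\mathbf{C}}, and their ratio is 1.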

Writing this post took the better part of four months, mostly because in the process of recreating the various derivations I stumbled onto some very interesting problems. In a corollary post to this one I will highlight one or more of these problems. I have looked at this post so many times that I have probably overlooked some typos and maybe even some notation errors. Just let me know if you find any and I will fix them. I also need to add annotations for derivations IV and V. I just really want to get this post published since it’s taken so long.

As always, feedback and comments are welcome.

Comments

  • OK, you’ve inspired me to go back to this one again! I’ve written down a treatment based on infinitesimal rotations—email me if you’d like to see it. Nothing really original, but different from the six you’ve provided here, and very geometric/physical.
