**A** is commonly used as the symbol for magnetic vector potential, just as **B** is used for flux-density or “induction”, and **H** for magnetic field strength or magnetizing force. It is a vector with three components in the coordinate directions *x*, *y*, *z*, and this is generally written Ā or **A** = (*A*_{x}, *A*_{y}, *A*_{z}).

Now I’m well aware that these opening statements may discourage people from reading any further. It looks like the start of a mathematics lecture. But this is an *Engineer’s* diary, not a mathematician’s diary. So I would like to share what I have learned about vector potential as an engineer with limited skill in mathematics. This is a personal account. It covers a period of roughly 50 years. In that time I have picked up four or five fundamental notions about vector potential, all of which have proved themselves to be both useful and timeless — “timeless” in the sense that I feel myself to be dealing with fundamental laws of nature that are unchanging and therefore reliable.

Instead of trying to explain vector potential by dumbing it down, I would like to share my own discoveries and explain what they mean to me in practical terms. They are not in chronological order, but in the order in which they come to mind while I’m writing.

Today we are aware that advanced numerical analysis in electromagnetics is generally written in terms of **A**. Suppose you have a complex engineering problem which involves a large number of unknown values of the magnetic field at different points in space, within the envelope of your motor or your generator, let’s say. As in any problem in applied mathematics, the object is to find the unknowns. In practically all the advanced software tools available for this purpose, the unknowns are expressed in terms of **A**. Given the wide use of such software, and the enormous importance of the results obtained with it, that fact alone is a powerful statement of the importance of **A**, whatever it is. If the experts are using it, it must be important.

Now we don’t pay the licence fee just to get a bunch of values of **A**. What we’re more interested in is *torque*, *flux-density*, *force*, *inductance*, and many other practical engineering parameters that become even more interesting when the field solution is evolving through time. This tells us (even without studying the mathematics) that all these useful quantities can be derived from **A**. If you will, **A** is the raw material which can be processed to produce all the other parameters.[1] In any field of creation that relies on a unique raw material, the value of that raw material is inherently high. And so it is with **A**.

It may seem curious that **A** does not appear in the common form of Maxwell’s equations, even though — to put it crudely — we buy our finite-element software to solve Maxwell’s equations. What’s going on? Is this fraud? Have we been taken for a ride? No, it’s just another aspect of what we’ve been contemplating, since the magnetic field parameters are derived from **A**. And of course they obey Maxwell’s equations (as long as the solution is correct). The term “potential” expresses this in a very general way: once we know **A**, we have the *potential* to derive the solutions for the other field variables.
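
The derivation of the "lower-order" field variables from **A** can be sketched numerically. The following toy check (my own illustration, not from the diary) uses the standard relation that flux-density is the curl of the vector potential: for a uniform field *B*₀ along *z*, one valid choice is **A** = (−*B*₀*y*/2, *B*₀*x*/2, 0), and taking the curl numerically recovers **B** = (0, 0, *B*₀).

```python
import numpy as np

# Assumed illustration: uniform flux-density B0 along z can be represented
# by the vector potential A = (-B0*y/2, B0*x/2, 0). The curl of A,
# evaluated by central finite differences, should recover B = (0, 0, B0).

B0 = 1.5   # T (arbitrary value for the check)
h = 1e-5   # finite-difference step, m

def A(x, y, z):
    return np.array([-B0 * y / 2, B0 * x / 2, 0.0])

def curl_A(x, y, z):
    # central differences for each partial derivative of A
    dAz_dy = (A(x, y + h, z)[2] - A(x, y - h, z)[2]) / (2 * h)
    dAy_dz = (A(x, y, z + h)[1] - A(x, y, z - h)[1]) / (2 * h)
    dAx_dz = (A(x, y, z + h)[0] - A(x, y, z - h)[0]) / (2 * h)
    dAz_dx = (A(x + h, y, z)[2] - A(x - h, y, z)[2]) / (2 * h)
    dAy_dx = (A(x + h, y, z)[1] - A(x - h, y, z)[1]) / (2 * h)
    dAx_dy = (A(x, y + h, z)[0] - A(x, y - h, z)[0]) / (2 * h)
    return np.array([dAz_dy - dAy_dz,
                     dAx_dz - dAz_dx,
                     dAy_dx - dAx_dy])

B = curl_A(0.3, -0.7, 0.2)   # any point gives the same uniform field
```

The same **A** plus any constant (or any gradient of a scalar) gives the same **B**, which is why the finite-element solver must fix a "gauge" before the values of **A** themselves mean anything.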

This view of **A** as *basic raw material* for field analysis first made itself known to me when I was a young engineer in a power engineering company, trying to solve an eddy-current problem. For a long time as a student I struggled to understand why we needed *two* variables (**B** and **H**) to describe a magnetic field; and so **A** might easily have appeared as a third variable to add to my confusion — a little bit like phlogiston or æther. But somehow I was fascinated by the pristine independence and “nobility” of **A**, on which the “lower-order” field variables **B** and **H** depended for their existence. To work with **A** was to join the aesthetic inner circle of purists, except for the fact that my boss wanted engineering answers in terms of lengths and thicknesses and power losses. Sigh!

The eddy-currents in my problem were flowing in a cylinder, and they were excited by space-harmonics in the ampere-conductor distribution of the stator winding in a large machine. It was natural to work in cylindrical coordinates. Since a full 3D solution was out of the question, the “done thing” in those days was to assume infinite axial length and reduce the problem to a 2D problem, so that **A** retained only one coordinate component, *A*_{z} along the axial direction: **A** = (0, 0, *A*_{z}). In effect, the vector potential became a scalar potential — quite humiliating really, but like so many engineers before and since, I had to get some kind of solution even if it was approximate.[2]

I did get a solution,[3] but it was concerned only with the field variables (mainly flux-density). Perhaps because of that narrow focus, I remained ignorant of several other properties of **A** which in later life came to appear simpler, more practical, and more elegant. Let me try to describe some of them. They relate to *circuits*.

The first property is the idea that **A** is set up by a distribution of current-density **J**, such that in certain simple problems the value of **A** at a point can be calculated directly by a formula when the current-density distribution is known. I never did find an *engineering* problem where this bit of physics proved useful by itself, but I was always intrigued by the notion that **A** and **J** were parallel in certain well-defined problems, including some very practical ones. [4]

This led to the discovery that the scalar product **A•J** was related to stored field energy, and thence to the calculation of inductance — for example, slot-leakage inductance (see video 13 in the video series). Inductance is a *circuit* parameter — sometimes called a “lumped” circuit parameter because it “lumps” or “collects” all the inductive effects in a circuit and presents them at the terminals.

This is closely related to the fact that the line integral of **A** around a closed circuit is the *flux-linkage*, and the time-derivative of that flux-linkage is of course the generated EMF (by Faraday’s law). Many other aspects of electromagnetic energy and energy-conversion arise from these fundamentals, of course; but that line integral is one of the elegant timeless principles that underlie so much of what we do in electrical engineering. The line integral — the flux-linkage — is the single parameter that presents the effect of the entire magnetic field *at the terminals of a circuit*. It turns out that the ratio of flux-linkage and current is inductance.
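
As a small worked check (my own numbers, not from the diary): for an ideal long solenoid, the interior vector potential is purely circumferential, *A*_φ(*r*) = μ₀*nIr*/2 with *n* = *N*/*l* turns per metre. Taking the line integral of **A** around one turn, multiplying by the number of turns, and dividing the resulting flux-linkage by the current reproduces the standard solenoid inductance formula.

```python
import math

# Sketch under stated assumptions: ideal long solenoid, N turns, length l,
# radius R, current I (all values below are arbitrary choices for the check).
# Inside the solenoid, A_phi(r) = mu0 * n * I * r / 2, with n = N / l.
# The line integral of A round one turn is the per-turn flux-linkage;
# flux-linkage / current = inductance.

mu0 = 4 * math.pi * 1e-7            # permeability of free space, H/m
N, l, R, I = 500, 0.25, 0.02, 2.0   # turns, m, m, A (assumed values)

n = N / l
A_phi = mu0 * n * I * R / 2             # Vs/m at the winding radius
lambda_turn = 2 * math.pi * R * A_phi   # line integral round one turn, Vs
flux_linkage = N * lambda_turn          # total flux-linkage, Vs
L_from_A = flux_linkage / I             # inductance, H

# Standard textbook result for comparison: L = mu0 * N^2 * pi * R^2 / l
L_textbook = mu0 * N**2 * math.pi * R**2 / l
```

The two results agree exactly, which is the point: the line integral of **A** delivers the circuit parameter directly, without ever tabulating **B** over a surface.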

We can say that flux-linkage is the link between the field solution and the circuit solution, and in this role it has enormous importance and practical value (too great to illustrate in detail here, but it runs through all the videos). The unit of **A** reflects this valuable property: [Vs/m] in precise practical engineering terms. So the unit of the line integral is the volt-second [Vs].

I’ve tried to avoid excessively mathematical language, and I haven’t used any equations (although I have referred to several which can be found in standard textbooks on electromagnetics). But that term “line integral” bothers me. How can it be explained in non-mathematical language? I imagined a railway line running all the way around Tokyo, forming a loop. I imagined myself walking the whole length of the line, counting sleepers (“ties” in American English). Would the number of sleepers be analogous to the flux-linkage? I’m afraid not. That number would only give me an idea of the length of the loop.

But suppose every sleeper was labelled with the value of the component of vector potential pointing in the direction of the line at that point. Then if I added up all these labelled values, each one multiplied by the spacing between adjacent sleepers, I would get a weighted sum that is analogous to flux-linkage. It would be useful only if I walked the *entire* loop, because flux-linkage is meaningful only in a closed circuit — or at least a circuit where the beginning and end are infinitesimally close together.[5] If I walked round the loop three times, I would get three times the flux-linkage (assuming the labels on the sleepers didn’t change).
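
The sleeper-counting walk can be sketched as a few lines of arithmetic (a toy model with assumed numbers, not a field solution). Here the "railway" is a circular loop of radius *R* in a uniform field *B*₀ along *z*, with **A** = (−*B*₀*y*/2, *B*₀*x*/2, 0); each sleeper is labelled with the component of **A** along the track, multiplied by the sleeper spacing, and the weighted sum comes out as the flux through the loop, π*R*²*B*₀.

```python
import math

# The "railway sleeper" sum, numerically: a discrete line integral of A
# around a circular loop of radius R in a uniform field B0 along z,
# using the vector potential A = (-B0*y/2, B0*x/2, 0).

B0, R = 0.8, 1.0                    # T, m (assumed values)
n_sleepers = 100_000
ds = 2 * math.pi * R / n_sleepers   # spacing between adjacent sleepers

def walk_loop(turns=1):
    total = 0.0
    for k in range(n_sleepers * turns):
        theta = 2 * math.pi * k / n_sleepers
        x, y = R * math.cos(theta), R * math.sin(theta)
        Ax, Ay = -B0 * y / 2, B0 * x / 2
        tx, ty = -math.sin(theta), math.cos(theta)   # direction of the track
        total += (Ax * tx + Ay * ty) * ds            # labelled value x spacing
    return total

flux = math.pi * R**2 * B0
once = walk_loop()       # one full lap: the flux-linkage
thrice = walk_loop(3)    # three laps: three times the flux-linkage
```

Walking the loop three times does indeed triple the sum, just as a three-turn coil has three times the flux-linkage of a single turn.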

Alas, I fear that this analogy may be too far-fetched, and unconvincing. Maybe the raw mathematics is easier. Or just calculate with a finite-element program!

[1] “Derived from **A**” is literally true in cases where the processing involves a derivative of **A**, although the processing may equally involve integrals of **A**.

[2] See Lawrenson P.J. and Ralph M.C., *The general 3-dimensional solution of eddy-current and Laplacian fields in cylindrical structures*, Proc. I.E.E., **117**, 469–472 (1970); summarized in Binns K.J., Lawrenson P.J. and Trowbridge C.W., *The Analytical and Numerical Solution of Electric and Magnetic Fields*, John Wiley & Sons, Chichester, 1992. These references interestingly reflect the furthest advance of analytical methods at the time of the “dawn” of the age of computerized numerical analysis.

[3] Miller T.J.E. and Lawrenson P.J., *Penetration of transient magnetic fields through conducting cylindrical structures, with particular reference to superconducting AC machines*, Proc. I.E.E., **123**, 437–443 (1976).

[4] See, for example, Kraus J.D., *Electromagnetics*, McGraw-Hill, New York, 1953; or Smythe W.R., *Static and Dynamic Electricity*, McGraw-Hill, New York, 1950. While the direct calculation of **A** appears to be rare, the parallel formula *Ampere’s Rule* (also known as the *Biot-Savart formula* in association with straight conductors) has a great many practical applications.

[5] This is a consequence of Faraday’s law, which applies strictly to *closed* circuits, even though we have many habits of ignoring the leads and interconnectors and assuming that they don’t count.