[No. 87] The Case for Big Data

 

Fig. 1 Motor outline

What is Big Data, and why do we need it?

Let’s try to answer this question with very little specialist knowledge of numerical analysis software.

Big Data refers to a computing environment in which all the numbers are big: the tens of millions of discrete elements in the mesh, the number of parallel processors, the very high clock speed, the gigabytes of memory, the terabytes of storage, and the days or weeks of processing time. But why?

Consider an electric machine with outline dimensions as shown in Fig. 1.  Suppose we want an accurate numerical analysis of all the physical processes going on inside it. Some of these processes involve variations in, say, flux-density or current-density over very short distances: for example, high-frequency skin effect in a conductor, or flux-density and magnetic field strength in a narrow saturated section of a rotor lamination, or thermal conduction over an uneven interface under compression.

Let’s imagine a 3D finite-element mesh capable of reproducing these variations throughout the entire volume of the machine. Using a uniform mesh with elements of the order of 0.1 mm in size, we would need something like 150/0.1 × 150/0.1 × 200/0.1 = 4,500,000,000 elements, that is, 4.5 × 10⁹ or 4.5 billion elements. Considering that each element will have several bytes of data to describe the state of the magnetic field inside it, that’s an awful lot of data. We can see how a simple-minded approach to 3D numerical analysis isn’t practical, at least not with most common computers. But the connection between accurate analysis and big data is rather obvious.
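As a quick back-of-the-envelope check, the arithmetic can be reproduced in a few lines of Python. The 100-bytes-per-element figure is only an illustrative assumption about the per-element state, not a property of any particular solver:

```python
# Element count for a uniform 3D mesh over a 150 x 150 x 200 mm bounding box (Fig. 1).
dx = 0.1                          # element size in mm (uniform mesh)
lx, ly, lz = 150.0, 150.0, 200.0  # outline dimensions in mm

n_3d = (lx / dx) * (ly / dx) * (lz / dx)
print(f"3D elements: {n_3d:.2e}")             # 4.50e+09

# Illustrative storage estimate, assuming ~100 bytes of field data per element.
bytes_per_element = 100
print(f"storage: {n_3d * bytes_per_element / 1e12:.2f} TB")   # ~0.45 TB
```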

We cannot escape from the fact that high field gradients require high mesh density. The obvious strategy is therefore to grade the mesh, so that smaller elements are used in regions of high field gradient, while larger elements are used elsewhere. This is one of the oldest techniques in finite-element analysis, in use since the early days [1,2]. In early works, the mesh density was controlled by the distribution of boundary nodes, but the method has acquired powerful adaptive and automated enhancements that no longer require user intervention at such a detailed level. In the most sophisticated methods, the mesh itself continually adapts its configuration as the solution progresses.

Even older is the simplification that arises in regions where the field gradient in one dimension is zero: for example, if there is no variation in the axial direction we have the familiar 2D model with a mesh confined to one plane. In the example in Fig. 1, this would reduce the mesh size to 150/0.1 × 150/0.1 = 2,250,000 or 2.25 million elements. Although this is a huge reduction in the problem size, by a factor of 2000, it is still a large mesh by most standards. Even if we have a computer that can handle it, we have to think about the solution time and other factors. Also, there may be no point in analysing a 2D model to an excessive degree of accuracy if it ignores important 3D effects.
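Continuing the sketch above, the same arithmetic for the 2D cross-section:

```python
# 2D model: mesh confined to one 150 x 150 mm cross-sectional plane.
dx = 0.1
lx, ly = 150.0, 150.0

n_2d = (lx / dx) * (ly / dx)
print(f"2D elements: {n_2d:.2e}")                 # 2.25e+06
print(f"reduction vs. 3D: {4.5e9 / n_2d:.0f}x")   # 2000x
```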

Grading the mesh is therefore essential. If we assume that, let’s say, 1% of the volume of a machine requires a mesh size of 0.1 mm, and the rest 1.0 mm, the required number of elements (in millions) changes to

0.01 × 4500 + (1 − 0.01) × 4500/10³ = 45 + 4.455 ≈ 49.5

which is still a very large mesh. But if this is getting close to what many users would consider to be a practical upper limit, at least it gives an idea of the scale of the problem.
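The graded-mesh estimate can be checked in the same way; the 1% fine-mesh fraction is the assumption stated above, and the factor of 10³ reflects the volume ratio between 1.0 mm and 0.1 mm elements:

```python
# Graded mesh: 1% of the volume at 0.1 mm, the remaining 99% at 1.0 mm.
n_fine_everywhere = 4500e6        # elements if the whole volume were meshed at 0.1 mm
fine_fraction = 0.01
coarsening = (1.0 / 0.1) ** 3     # one 1.0 mm element replaces 10^3 elements of 0.1 mm

n_graded = (fine_fraction * n_fine_everywhere
            + (1 - fine_fraction) * n_fine_everywhere / coarsening)
print(f"graded mesh: {n_graded / 1e6:.1f} million elements")   # 49.5 million
```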

Of course, symmetry is another feature that may be very helpful. For example, in a machine that can be considered symmetrical in the axial direction, the number of elements can be halved. If there is sufficient symmetry in the transverse cross-section, further reductions are sometimes possible by a factor of 2, 4, or more, depending on the slot/pole combination and the winding layout. These reductions are so valuable that they should be exploited wherever possible. Sophisticated software packages may even be able to recognize such symmetries automatically and implement them.
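Purely to illustrate the scale of the saving, the snippet below applies two hypothetical symmetry factors to the graded-mesh estimate; the factors of 2 and 4 are assumptions chosen for the sake of example, not properties of the machine in Fig. 1:

```python
# Hypothetical symmetry reductions applied to the ~49.5-million-element graded mesh.
n_graded = 49.5e6
axial_mirror = 2        # assumed mirror symmetry halfway along the axial length
periodic_sector = 4     # assumed quarter-model periodicity in the cross-section

n_reduced = n_graded / (axial_mirror * periodic_sector)
print(f"after symmetry: {n_reduced / 1e6:.1f} million elements")   # ~6.2 million
```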

It is outside my scope to consider the algorithms used for mesh generation or solution, but it’s obvious that these functions are critical in the pursuit of quick, accurate solutions with numerical stability. I did say at the beginning, “with very little specialist knowledge of numerical analysis software”!

To me, as an old-school engineer, the question seems to be not “Why do we need Big Data?” but “How did we ever survive without it?”


References

[1] Lowther D.A. and Silvester P.P., Computer-Aided Design in Magnetics, Springer-Verlag, 1986

[2] Reichert K., The Calculation of Magnetic Circuits with Permanent Magnets by Digital Computers, IEEE Transactions on Magnetics, Vol. MAG-6, No. 2, June 1970

PROFILE

Prof. Miller was educated at the universities of Glasgow and Leeds, U.K., and served an industrial apprenticeship with Tube Investments Ltd. He worked for G.E.C. in the U.K. and General Electric in the United States. From 1986 to 2011 he was professor of electric power engineering at the University of Glasgow, where he founded the Scottish Power Electronics and Electric Drives Consortium. He has published more than 200 papers and 10 books, holds 10 patents, and has given many training courses. He has consulted for several industrial companies in Europe, Japan and the United States. He is a Life Fellow of I.E.E.E. and in 2008 he was awarded the Nikola Tesla Award.

The Green Book: “Design of Brushless Permanent-Magnet Machines”

The Blue Book: “Design Studies in Electric Machines” (June 30, 2022)