
Lecture 4 (CEM) -- Transfer Matrix Method

40,088 views

EMPossible


This lecture introduces the simple 1D transfer matrix method. It starts with Maxwell's equations and steps the student up to the equation for the transfer matrix and how to calculate the global transfer matrix. It then explains why the method is inherently unstable.
Prerequisite Lectures: 2 and 3

Comments: 60
@jman2oo2 7 years ago
This was beautiful to watch. So clear :')
@empossible1577 7 years ago
Thank you!! BTW, I have made some revisions/corrections/additions to the course notes. You can get the latest from the course website here: emlab.utep.edu/ee5390cem.htm
@drillsargentadog 2 years ago
The fact at 20:57 is just a simple consequence of the Jordan canonical form (you give only the case where algebraic and geometric multiplicities are equal, so all Jordan blocks are 1x1). The matrix exponential is defined by its power series: e^A = I + A + A^2/2! + ..., so when you have the diagonalization A = W*S*inv(W), you get e^A = I + W*S*inv(W) + W*S*inv(W)*W*S*inv(W)/2! + ... = I + W*S*inv(W) + W*S^2*inv(W)/2! + ... = W*e^S*inv(W).
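For anyone who wants to check this identity numerically, here is a minimal MATLAB sketch (illustrative only; the matrix is random, not the lecture's OMEGA). It diagonalizes A with eig and compares W*e^S*inv(W) against MATLAB's built-in expm:

  A         = randn(4);                      % a random 4x4 matrix (almost surely diagonalizable)
  [W, S]    = eig(A);                        % columns of W are eigenvectors, S is diagonal
  expA_diag = W * diag(exp(diag(S))) / W;    % W * e^S * inv(W)
  expA_ref  = expm(A);                       % MATLAB's built-in matrix exponential
  disp(norm(expA_diag - expA_ref))           % near round-off (~1e-13), so the two agree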
@samuellyche3134 8 years ago
For math stuff on why f(A) = W*f(lambda)*inv(W), see "similar matrices" and Jordan canonical form
@AnubhabHaldarChronum 8 years ago
Prof Raymond: I do not know if this has been stated before, or if I'm misunderstanding the words, but when a matrix is represented as P*D*inv(P), where D is a diagonal matrix, this particular process is known as diagonalization. The entire topic is closely related to similarity transformations, which is what you may have been referring to. (Re: ~20th minute)
@empossible1577 8 years ago
Yes this is a diagonalization, but I have not been able to find where it is stated that this can be done for any function f(). Perhaps there are limitations that I am not aware of. I will admit that I have not looked very hard into this.
@podolsky81 9 years ago
Hello Professor Raymond, I think there's a subtle mistake in the sign convention. When you replaced the d/dt operator in Maxwell's equations, you assumed an exp(i*omega*t) dependence and hence replaced d/dt by i*omega. Once this is done, if your plane wave is assumed to have an exp(i*k*x) dependence, that automatically means it's traveling in the -x direction. If you wanted your exp(i*k*x) wave to travel in the +x direction, you should have changed the time dependence to exp(-i*omega*t).

Now, a problem arises when you get to the matrix differential equation, let's call it d(psi)/dz = A * psi, where A is the matrix you call capital omega. Whether you assumed an exp(i*k*x) dependence or an exp(-i*k*x) dependence, you are going to arrive at the same matrix A. The equations are blind to whether the wave is traveling in the +x or -x direction, as they should be, because the solution will have forward and backward traveling waves anyway. (Also, mathematically, kx and ky occur squared or as kx*ky, and hence flipping both signs will not change the matrix.) So, in essence, what you arrive at in your slides is the matrix equation assuming that exp(-i*k*x) means propagation in the +x direction.

A subtle thing occurs when you try to sort your modes using the Poynting vector in the example you provided. Because now the direction of propagation is reversed, you find that the Poynting vector of a mode points in +z while the eigenvalue says it should be -z. Here I think we're at the mercy of the eigensolver, because if we multiply an eigenvector by a constant, it's still an eigenvector. Hence, to fix the apparent contradiction, I find that if we multiply the eigenvectors all by complex i, the Poynting vectors and the eigenvalues match the desired direction of propagation. Probably what needs to be done is to calculate the Poynting vector and then use the eigenvalue to find out if it implies the same direction of propagation or not. If not, we need to multiply the Poynting vector by -1, and hence multiply E by i and H by i, to fix this before proceeding. Let me know what you think.

I also really appreciate the effort of recording the lectures and putting them here; please keep them coming :) as I use them to teach myself computational E and M. Cheers, and thank you very much for making these lectures public.
@empossible1577 8 years ago
+George I definitely think I could get more sophisticated with the mode sorting and I think you are on to something. It would be great if you could find an example where the easy mode sorting fails, but the modified Poynting vector approach does not. Very good observation!
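For readers following this exchange, here is a rough MATLAB sketch of the Poynting-vector sorting idea (an illustration only, not the lecture's code). It assumes the columns of W hold the physical field components of each mode ordered as [Ex; Ey; Hx; Hy]; the refractive index and the two hand-built plane-wave columns are placeholder assumptions, and in the lecture's normalized-H formulation an extra factor of j appears, which is exactly the subtlety discussed above:

  n    = 1.5;  eta0 = 376.73;  eta = eta0/n;   % example index and wave impedance
  % Two x-polarized plane waves, one going +z and one going -z, as columns [Ex; Ey; Hx; Hy]
  W    = [ 1        1      ;
           0        0      ;
           0        0      ;
           1/eta   -1/eta  ];
  Sz   = real( W(1,:).*conj(W(4,:)) - W(2,:).*conj(W(3,:)) );   % Re{Ex*Hy' - Ey*Hx'} per mode
  disp(Sz)             % positive for the forward column, negative for the backward one
  fwd  = Sz > 0;       % mask used to split W (and the eigenvalues) into forward/backward blocks;
                       % purely evanescent modes (Sz ~ 0) would need the eigenvalue as a tie-breaker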
@ozzyfromspace 3 years ago
@21:00 another way to go about this is... e^x = 1 + x + x^2/2! + x^3/3! + ... and set x to your matrix. You should get the same answer. Not that I've ever needed to take e to the power of a matrix, but yeah, that's how you could do it. I saw it in an MIT linear algebra class sometime last year. Another advantage of this approach is that you can use SVD on the matrices in the expansion to get an exact solution. Very cool shortcut in my view. Awesome lecture btw, I'll probably never need this stuff, but it's far better than soaking up endless hours of PUBG and Fortnite 😅. Yes, I'm a gamer 😂👽☮️
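A quick MATLAB illustration of that series approach (just a sketch with an arbitrary small matrix; in practice expm uses scaling-and-squaring because a raw truncated series loses accuracy when ||A|| is large):

  A    = 0.1*randn(3);                % a small random matrix so the series converges quickly
  Eser = eye(3);  term = eye(3);
  for k = 1:20
      term = term*A/k;                % accumulates A^k / k!
      Eser = Eser + term;
  end
  disp(norm(Eser - expm(A)))          % near round-off for this well-scaled A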
@empossible1577 3 years ago
Most of my students are gamers. I even started making some homework problems in my classes game themed. LOL
@empossible1577 3 years ago
BTW, you are looking at a lot of old material. My best stuff is the latest stuff that you can get directly from the course websites. You might also like the graphic visualizations I post on LinkedIn from time to time. empossible.net/academics/
@ozzyfromspace 3 years ago
@@empossible1577 a man of culture I see, I'm sure the gamer students appreciate it haha
@ozzyfromspace 3 years ago
@@empossible1577 thanks for pointing me to your awesome resources -- I need all the help I can get lol! And congratulations on being named to the Florida Tech Career Hall of Fame! 🎊 I'll definitely follow your LinkedIn 🙌🏽
@Pepek4896 8 years ago
Hi Professor Raymond, I was trying to solve the anisotropic case using the TMM taught in this lecture, and after mode sorting I ended up with messy 4x4 eigenvector and diagonal eigenvalue matrices. However, this also means that the amplitude c will be a 4x1 column matrix (the first two values of the column matrix indicate forward propagation and the other two represent backward propagation), instead of the 2x1 column matrix shown in the lecture video. My question is, does each of the four amplitudes c have a unique meaning? Thank you.
@empossible1577 8 years ago
Even with the 2x2 method, you end up with a 4x1 column vector. This is hidden by the fact that we lump both polarizations into a single 2x1 column vector that we treat as a single coefficient. So the final 2x1 column vector is actually 4x1. The meaning of the four quantities is this: the first two are the amplitudes of the two forward modes and the second two are the amplitudes of the two backward modes. I have added an extra slide about the actual size of the matrices in the latest version of the notes that you can get from the course website. I have not rerecorded the lectures. Here is a link to the course website: emlab.utep.edu/ee5390cem.htm Specifically, see Lecture 5, slide 19. It is not everything, but hopefully it helps.
@vahagnmkhitaryan8261 6 years ago
Dear Prof. Raymond. First of all, thank you for these amazing lecture series. I was wondering about the sign of the eigenvalue exponentials in the fully anisotropic case. After sorting the eigenvalues, you have also changed the sign of the lambda^{minus} matrix. This appears also in the corrected notes. I was wondering if the change of sign is necessary? If one does this, he/she ends up having four forward-propagating modes instead of two forward and two backward waves. Or am I missing something? Thanks again for the amazing work you are doing.
@empossible1577 6 years ago
Can you please point out the specific slide you are talking about? I should not have been changing signs, but I suspect I am confused about your question. Very sorry!
@vahagnmkhitaryan8261 6 years ago
Dear Prof. Raymond, I am referring to slide 43 of Lecture 4, or the summarizing slide number 4 in Lecture 5b, where the "lambda plus" and "lambda minus" matrices are defined. Just above them, it is written exp("lambda plus") and exp(-"lambda minus"). I refer to the minus sign in this second expression. After sorting, "lambda minus" is in principle a diagonal matrix with complex elements that have negative imaginary parts. As far as I understand, when we multiply these numbers by a minus sign, we reverse the direction of propagation of these modes. Thanks again
@empossible1577 6 years ago
Oh, I think I understand your question and I see where this is confusing. I should make a note of this on the slide. I did not intend for the negative sign to imply that you actually make something negative. Instead, it was intended to indicate that that is simply where you put the eigenvalues for the backward waves. Does this make sense? Sorry for the confusion.
@vahagnmkhitaryan8261 6 years ago
Yes, thanks. That explains what I was confused about. Thank you very much for everything.
@easonxiao3383 1 year ago
Thanks very much for the video. I have some questions: how do I calculate the absorption in each layer, and how do I calculate the absorption at each position?
@empossible1577 1 year ago
To do this, you will need to calculate the internal fields and then integrate sigma*|E|^2 throughout the layer, or layers, of interest. I have some crude notes on how to do this in Lecture 2i, TMM Extras, here: empossible.net/emp5337/
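A rough MATLAB sketch of that integral, assuming you have already solved for the internal field E(z) on a grid of positions inside the layer. Every value below (wavelength, permittivity, thickness, and the placeholder field profile) is an illustrative assumption, and the sign of the loss term depends on your time convention:

  eps0  = 8.854187817e-12;                        % free-space permittivity [F/m]
  lam0  = 1550e-9;  omega = 2*pi*299792458/lam0;  % example wavelength and angular frequency
  er    = 12 - 0.5i;                              % complex relative permittivity (example value)
  z     = linspace(0, 200e-9, 501).';             % sample positions inside the layer [m]
  E     = exp(-1i*2*pi*sqrt(er)*z/lam0);          % placeholder field; use the internal field from TMM
  sigma = omega*eps0*abs(imag(er));               % equivalent conductivity from the loss term of er
  p_z   = 0.5*sigma*abs(E).^2;                    % local time-averaged dissipation density [W/m^3]
  P_abs = trapz(z, p_z);                          % absorbed power per unit area in this layer [W/m^2]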
@asadoncasp 7 years ago
Lovely, sir. Is there any matrix equation containing the simple Fresnel equations for an air-to-dielectric medium?
@empossible1577 7 years ago
Not that I know of. There are matrix equations that will predict the same effects as the Fresnel equations. You could also derive the Fresnel equations from a TMM-like matrix approach, but the matrices are still not the Fresnel equations.
@podolsky81 9 years ago
Following up on my last comment, we do not need to adjust the eigenvectors. It's just that the definition you used for the Poynting vector is ExH instead of ExH*, and this fixes things in the end. I would prefer, though, to use the exp(-i*omega*t) convention from the start.
@zilongqin5578 5 years ago
I agree with you, the eigenvalue rearrangement is unnecessary.
@wafaaazouzi902 7 years ago
Hi Professor Raymond, I am looking for MMT code for MATLAB simulation, but I have no idea how I can get this package or code. My question is: how can I get MMT implemented in MATLAB (MathWorks)? Thank you for reading.
@empossible1577 7 years ago
Do you mean TMM? As a rule, we do not ever give out any codes. The block diagrams in the notes are as close to code as it gets. There are also benchmarking documents available on the course website that show all of the intermediate results. Essentially, I just run the MATLAB code without any semicolons after the calculations. Here is the course website: emlab.utep.edu/ee5390cem.htm You can look at some open-source tools at the following website. Just scroll down maybe 2/3 of the way, under the heading "Electromagnetic Simulation Software." There should be something there for you if you do not want to write your own code. emlab.utep.edu/opensource.htm
@winnis88 8 years ago
Prof Raymond, I do not fully understand why the TMM method is unstable. Yes, it treats everything as forward-moving waves; however, in the eigenvalue matrix we get eigenvalues with positive and negative signs indicating the direction of propagation of the wave. This should take care of the direction of propagation of the waves in the final solution after multiplication with the mode coefficients. At 47:01, when you make a new matrix with exp(+lambda*z) and exp(-lambda*z), are you not forcing the backward wave to become a forward wave?
@empossible1577 8 years ago
+_jp88 Good question. First, just consider basic exponentials: exp(+az) explodes when a>0 and z is always increasing; exp(-az) decays nicely when a>0 and z is always increasing. When treating waves as forward propagating, the z term above is always increasing. Since a>0, it all comes down to the sign of the exponential. The exp(+az) term explodes. If exp(+az) were treated as a backward wave, then z would be decreasing, not increasing, and exp(+az) would then decay nicely.
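A tiny MATLAB illustration of that point (the decay constant and thicknesses below are arbitrary values, not from the lecture): once a growing exponential is carried through a thick enough stack, double precision simply runs out.

  a     = 2*pi/1e-6;            % decay constant of an evanescent/lossy mode, ~1/um (arbitrary)
  z     = (0:120)*1e-6;         % cumulative thickness from 0 to 120 um
  grow  = exp(+a*z);            % the "forward-propagated" growing term
  decay = exp(-a*z);            % the physically decaying term
  fprintf('%g   %g\n', grow(end), decay(end))   % prints Inf and 0: the transfer matrix is destroyed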
@winnis88 8 years ago
+CEM Lectures Thank you very much. I think I understand.
@zilongqin5578 5 years ago
@@empossible1577 I think the rearrangement is unnecessary; the eigenvalue gives a complex propagation constant based upon your assumed positive z direction. It is inverted already, i.e. the wavenumber is already negative.
@empossible1577 5 years ago
@@zilongqin5578 The purpose of the rearrangement is not to get the sign correct. As you pointed out, it is already correct. The purpose is to separate the forward and backward waves so that they can be properly handled by scattering matrices. In principle you can do this without rearranging, but your scattering matrices would be ugly and unconventional.
@myung-giji1409 4 years ago
Hello Professor Raymond, I'm trying to figure out the eigenvector W and eigenvalue (Lambda). However, how can I reach the answer described in 'Getting a Feel for the Numbers'? There is no equation or formula for that. I'm stuck at that point. :) Thanks,
@empossible1577 4 years ago
The eigen-vector matrix W and eigen-value matrix LAM are calculated by solving the matrix OMEGA as an eigen-value problem. The OMEGA matrix is discussed around 16:35. Once you have OMEGA, in MATLAB you would calculate W and LAM as [W,LAM] = eig(OMEGA);
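A quick check of that call (the OMEGA below is just a stand-in 2x2 matrix, not the one from the lecture): with two outputs, MATLAB's eig returns the eigenvector matrix first and the diagonal eigenvalue matrix second, and together they satisfy OMEGA*W = W*LAM.

  OMEGA    = [0 1; -2.25 0];          % stand-in matrix only (NOT the lecture's OMEGA)
  [W, LAM] = eig(OMEGA);              % W: eigenvectors in columns, LAM: diagonal eigenvalue matrix
  disp(norm(OMEGA*W - W*LAM))         % ~1e-16, confirming the decomposition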
@myung-giji1409 4 years ago
@@empossible1577 Interesting. That works. So this means that we can obtain LAM and W through the function eig() in MATLAB. Thanks for your help :)
@empossible1577 4 years ago
@@myung-giji1409 Yes. In fact, that is the only way when (if) you move on to RCWA or the method of lines.
@michaelscottspencer3997 4 years ago
At 15:20 you provide the format for anisotropic materials. Would it be incorrect to use the diagonal representation of the relative permeability and permittivity, in which case it is only slightly more complicated? Or is there some physics that gets lost when assuming a diagonal (but multi-valued) tensor dielectric function?
@empossible1577 4 years ago
If you have isotropic (or even diagonally anisotropic) materials, the whole method simplifies to what is called the 2x2 TMM. Keep watching the videos because that is what I implement.
@michaelscottspencer3997 4 years ago
@@empossible1577 Thanks for the quick reply! I'll just carry on then.
@willeth6986 4 years ago
Awesome. Do you have a lesson on how to implement this in MATLAB?
@empossible1577 4 years ago
Let me point you to the official course website. You can download the notes, get links to the videos, and have links to other resources like MATLAB codes. empossible.net/academics/emp5337/
@JordanEdmundsEECS 4 years ago
1. Why does the magnetic field have different eigenvectors but the same eigenvalues as the electric field? (Lec 2a, slide 54). The eigenvalues being the same makes sense because the E- and H-fields propagate together, and the eigenvectors being different makes sense if P and Q do not commute (in the case of these matrices they do, but I suspect their generalizations do not).
2. How are we justified in assuming that the solution for the magnetic field has the same c+ and c- coefficients as for the electric field, not different ones? It seems like these should just be arbitrary coefficients in both cases, and I'm not sure how they should be related.
@empossible1577 4 years ago
1. In ordinary media, the amplitude of H is adjusted consistently with the impedance. In anisotropic media, it is crazier. Otherwise, I think you are getting this.
2. E and H of a wave must reflect the same way. Otherwise, they would not satisfy the impedance of the medium. It is a single wave, so it only reflects one way.
@wolfwolf7121 6 years ago
Hello, is it possible to implement the TMM in 2D and 3D cases? Thank you for the lectures
@empossible1577 6 years ago
Yes, absolutely. These techniques go by many names including method of lines, rigorous coupled-wave analysis, Fourier modal method, transfer matrix method with a plane wave basis, eigen-mode expansion technique, and more. If you follow this set of lectures, you will learn rigorous coupled-wave analysis and the method of lines. Also, here is a link to the course website which has links to the videos as well as the latest version of the notes and other resources to help you. emlab.utep.edu/ee5390cem.htm
@wolfwolf7121 6 years ago
Thank you. I calculated the transmittance and reflection coefficients for different layers with different incident angles, but I cannot build the 2D visualization of wave spreading like in your lectures about the FDTD method. As I see it, in 2D and 3D it is better to use the FDTD method.
@empossible1577 6 years ago
Which pictures of waves spreading are you talking about? Which method is better depends more on the device you are simulating and the physics you want to incorporate. FDTD is easily parallelized, so you will see it used for simulating very large structures. It is also a time-domain method, so you will see it used for transient analysis and devices incorporating nonlinear material properties. For small to moderate size devices, frequency-domain methods are usually faster and more accurate. For full 3D, it is hard to beat the variational methods like finite element, method of moments, etc. In special cases, like periodic dielectric structures, rigorous coupled-wave analysis is extremely good and will outperform finite element in speed and efficiency. For 2D simulations, my group really likes finite-difference frequency-domain. For 1D, we like TMM. I could go on and on comparing the methods, but so far I have not found one single method that is better than the rest. I have only found ones that seem best for a specific device.
@wolfwolf7121 6 years ago
After I calculated the transmittance and reflection coefficients, I chose some frequencies and built a pulse from them (for example |E(z)|^2: kzfaq.info/get/bejne/iL2GjZeautulo30.html). Can I draw something like this kzfaq.info/get/bejne/nJafpcZ3l9vGeac.html (1:56) with the help of the TMM method?
@prishti 4 years ago
Sir, I am an electrical engineering student. How is the transfer matrix method used for a solar PV module, since it is a multilayered structure?
@empossible1577 4 years ago
It can be used to solve for how much energy gets absorbed by the cell in order to predict efficiency and perhaps produce better designs. TMM is used often to optimize antireflection layers as well.
@level-zj2xq 2 years ago
Sir, how is this model used for a multilayer structure?
@level-zj2xq 2 years ago
Harish, what did you do for the solar cell? How did you use this method?
@asadoncasp 7 years ago
Respected Sir, may you live long! Why do we need to use TMM instead of using the Fresnel equations? I am sorry to say that the sound quality is very poor and has distortion. It would be a blessing for students if you used a whiteboard in your future classes, so that they become more friendly for non-native English speakers.
@empossible1577 7 years ago
The Fresnel equations are great for quantifying reflection and transmission from a single interface. However, what if you have a stack of five different layers? In this configuration there are many interfaces and there arises a very complex arrangement of scattering that leads to some overall reflection and transmission. The Fresnel equations cannot account for multiple interfaces. TMM is used a lot in thin film optics where sometimes there can be hundreds of layers.
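To make the multilayer point concrete, here is a short MATLAB sketch using the classic thin-film characteristic-matrix form of TMM at normal incidence. This is the standard textbook formulation, not the one developed in this lecture series, and the indices and thicknesses below are arbitrary example values:

  % Reflectance of a layer stack at normal incidence via 2x2 characteristic matrices.
  lam0 = 550e-9;                    % free-space wavelength [m]
  n0   = 1.0;   ns = 1.52;          % incident-medium and substrate refractive indices
  n    = [2.3 1.38 2.3];            % layer refractive indices (example values)
  d    = [60e-9 95e-9 60e-9];       % layer thicknesses (example values)
  M = eye(2);
  for i = 1:numel(n)
      delta = 2*pi*n(i)*d(i)/lam0;                     % phase thickness of layer i
      Mi    = [cos(delta),         1i*sin(delta)/n(i);
               1i*n(i)*sin(delta), cos(delta)       ];
      M = M*Mi;                                        % accumulate the global matrix
  end
  BC = M*[1; ns];                                      % tangential-field column [B; C]
  r  = (n0*BC(1) - BC(2)) / (n0*BC(1) + BC(2));        % amplitude reflection coefficient
  R  = abs(r)^2                                        % power reflectance of the whole stack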
@empossible1577 7 years ago
Sorry about the sound quality. You are listening to some of the first lectures I ever recorded. The sound quality will get better. When I get time, I will go back and rerecord the lectures.
@asadoncasp 7 years ago
Thanking you with anticipation. Now I have eaten your words!
@abrarahmed2621 7 years ago
Hi Professor Raymond, I am looking for a TMM .p file for MATLAB simulation, so that after implementation we can verify it against your results. My question: how can I confirm that a TMM implemented in MATLAB (MathWorks) is 100% correct? Thank you for reading.
@empossible1577 7 years ago
First, let me point you to the official course website. This contains the latest version of the notes, links to the videos, and other resources to help you. emlab.utep.edu/ee5390cem.htm Once there, you will find a link to a "Benchmarking Aid for TMM" near the bottom of the Homework section. This sets up a two-layer problem and provides the values of most every intermediate parameter through the entire TMM algorithm. The block diagram for the algorithm is shown on Slide 48 in the latest version of Lecture 5. With the block diagram and benchmarking aid, you should be able to get your own TMM code working without the need of the p file you suggested. Hope this helps!!
@abrarahmed2621 7 years ago
But it does not have any .p or source code for the first four homeworks. :( Can you share or upload it?