Patrick Potts
Lund University
Monday Oct 19, 2020 / 15:00-16:00 CEST
Probabilistically violating the first law of thermodynamics in a quantum heat engine
At the nanoscale, thermodynamic quantities such as entropy, work, and heat become fluctuating quantities. Their description requires new rules that go beyond the standard laws of thermodynamics. A striking example is given by probabilistic violations of the second law, which are possible since entropy only has to increase on average. For classical systems, the first law of thermodynamics is guaranteed to hold even at the level of probabilities, due to energy conservation. In quantum systems, however, separating energy into heat and work may imply that these quantities cannot be known simultaneously. A description of their fluctuations then results in probabilistic violations of the first law. Here we investigate such violations in a detailed case study based on a thermoelectric heat engine that relies on photo-assisted Cooper pair tunneling.
Authors: Timo Kerremans, Peter Samuelsson, Patrick P. Potts
Hi Patrick,
Thank you for your nice talk. It is intriguing that the first law can be probabilistically violated at the trajectory level.
I have a question regarding this result. The two-point measurement in the definition of the heat does not affect the dynamics on average. However, at the trajectory level, I believe that it does, and energy may be pumped into the system during the measurement. Is that the reason for the violation of the first law, given the definitions of heat and work in your theoretical framework? Maybe I have misunderstood something. Anyway, I would like to hear your opinion on this. Thank you again for your talk.
Hi Tan,
Thanks for your interest. The two-point measurement scheme I discussed involved measurements only on the thermal baths. Since we consider weak coupling, the baths can, at the beginning of the process, be described by a Gibbs state, with no coherences in the energy basis. A projective measurement of the energy then does not alter the dynamics (not even on the trajectory level).
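In equations (just restating the argument above): the initial bath state is Gibbsian and therefore commutes with the bath Hamiltonian, so a projective energy measurement leaves the ensemble unchanged,

\[
\rho_B = \frac{e^{-\beta H_B}}{Z_B}, \qquad [\rho_B, H_B] = 0 \;\Rightarrow\; \sum_E \Pi_E\, \rho_B\, \Pi_E = \rho_B,
\]

where \Pi_E projects onto the eigenspace of H_B with energy E.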
Indeed, you don’t need to invoke the two-point measurement scheme to define heat in our system. Heat is carried by photons that enter and leave the system. These “jumps” are described by a classical process in the weak-coupling limit. Therefore, for a given run of the experiment, the number of photons exchanged with each bath, and therefore the heat, is well defined (and, at least in principle, measurable). As a consequence, one can assign a definite value of heat to each trajectory, in complete analogy with classical stochastic thermodynamics.
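To make this concrete, here is a schematic version (the notation is illustrative, not taken from our paper): if, in a single run, n_\alpha^{\mathrm{abs}} photons are absorbed from bath \alpha and n_\alpha^{\mathrm{em}} are emitted into it, all at frequency \omega_\alpha, the heat drawn from that bath is

\[
Q_\alpha = \hbar\omega_\alpha \left( n_\alpha^{\mathrm{abs}} - n_\alpha^{\mathrm{em}} \right),
\]

a definite number for each trajectory.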
Work production, on the other hand, is a coherent process in our system. If we want to measure the work, we have to deal with measurement backaction. This difference between heat and work is what gives rise to the probabilistic first law violations.
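Schematically, the resulting statement is the following (with a sign convention in which the first law reads \Delta U = W + \sum_\alpha Q_\alpha):

\[
\langle W \rangle + \sum_\alpha \langle Q_\alpha \rangle = \Delta \langle U \rangle,
\qquad \text{yet} \qquad
P\Big( W + \sum_\alpha Q_\alpha \neq \Delta U \Big) > 0,
\]

where the probability on the right is evaluated with the quasi-probability distribution for work: the first law survives on average, but not trajectory by trajectory.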
Hi Patrick,
Thanks a lot for your very nice talk. I have a question concerning the Lindblad formulation. You mentioned that this formulation can lead to thermodynamic inconsistencies. Could you please point me to some literature where this is explained in more detail?
Thanks a lot!
Hi Alexander,
Thanks for your interest. There are many works on this subject. I can point you to one I was involved in (https://iopscience.iop.org/article/10.1088/1367-2630/aa964f/meta), where you should find citations relevant to this discussion, as well as information on when the local master equation can be employed safely.
Best,
Patrick
Thanks Patrick for a very nice talk.
There were a few questions today regarding the concept of work quasi-probability distributions: these have been discussed, e.g., in “Time-reversal symmetric work distributions for closed quantum dynamics in the histories framework” (https://iopscience.iop.org/article/10.1088/1367-2630/aa703f/meta) and references therein.
On the subject of measuring work/full counting statistics, references include https://journals.aps.org/pra/abstract/10.1103/PhysRevA.94.052103 and https://journals.aps.org/pra/abstract/10.1103/PhysRevA.96.052115.
Finally, I believe Philipp’s paper where heat and work always fulfil the first law (in contrast to what you described in your talk) is https://journals.aps.org/pre/abstract/10.1103/PhysRevE.100.022127
Hope this helps!
J
Hi Janet,
Thanks for these references, they are very useful!
I already started to look at Philipp’s work (and emailed with him). To my understanding, in his approach (as well as in some similar approaches), work is defined as an average value conditioned on measurement outcomes. In this average, some fluctuations are already averaged out. Depending on the choice of measurements, more or fewer fluctuations are averaged out. In our approach, we really want to get at the fluctuations of the “unmeasured” system, similar to what you did in your interesting NJP paper.
Aaah! If Philipp’s heat and work are already averaged, then there’s not even a conflict between what he says and what you say. But is that so? I’m sure you and Philipp can get to the bottom of it and let us know in what manner your two approaches complement each other! J
Hello all!
Two comments from my side:
1) In my understanding, the way Patrick chooses to analyze work and heat fluctuations is within a certain theoretical framework, which is not uniquely determined by his physical setup. So the conclusion that “the first law does not hold at the trajectory level” only holds with respect to that specific framework, and I wanted to raise awareness that this is not a general and unavoidable feature.
2) Concerning my reference about the “operational approach”: this relies on the best guess of heat and work based on whatever information you have about the system. Depending on how you measure your system, this can involve additional averaging or not! In general, it does not assume precise microscopic information about the bath, and therefore the spirit of the operational approach is very different from Patrick’s approach. However, I found it worth pointing out because I do not find any violations of the first law along a single trajectory (which somewhat supports my claim in point 1) above).
Thanks for your comments, Philipp, and also for the stimulating email discussion. I believe that our conclusions are more general than your comment implies.
You are of course correct that our conclusions depend on the theoretical framework. Indeed, as mentioned in my talk, you can simply define work by the procedure of inferring work from measuring heat. Then the first law holds by definition, but this definition of work has little to do with an actual measurement of work.
Our approach is actually determined by the experimental scenario, if you add the condition that heat should be defined in the same way as for classical stochastic systems. As quantum mechanics does not impose any fundamental restrictions on measuring heat in our system, this seems a natural condition. In electronic systems, work is usually accessed by weakly measuring the electrical current. The cumulants attainable in this way are exactly the cumulants of the quasi-probability distribution we use for describing work fluctuations. The experimental scenario therefore strongly supports our choice of work definition.
As for the generality of our results, we believe the probabilistic violations of the first law to appear whenever the definition of work is affected by a backaction/imprecision trade-off while the definition of heat is not. This is the case whenever you define heat analogously to classical stochastic thermodynamics (which you can at weak coupling) while you try to include quantum coherences in the energy eigenbasis in the definition of work. I believe such an approach is followed by a number of researchers in the field.
Cool! Now, we have an exciting discussion! 🙂
So again two comments from my side!
(1) First, Patrick says that “This is the case whenever you define heat analogously to classical stochastic thermodynamics”. However, I would be greatly surprised if anyone in classical stochastic thermodynamics ever proposed to measure heat by projective measurements of the bath. A central paradigm of classical stochastic thermodynamics is that all quantities should be accessible based on observing only the system. This is precisely the reason for its experimental power.
(2) This is probably related to a gap in my knowledge, although I have worked for a long time on full counting statistics, also with various experts in this field. So I am still wondering why you say that these negative quasi-probabilities pop naturally out of full counting statistics experiments. I have never observed this. In the end, all probabilities that you write down in a lab, from whatever statistics you access, are positive. So starting with a probability distribution for work that can be negative might be very interesting from a theoretical viewpoint, but I disagree that this is the natural way.
Thanks for your comments, I am happy to clarify:
1) In stochastic thermodynamics, for systems described by a Markovian rate equation, heat can be defined by assigning to each jump the energy exchanged with the environment. This is equivalent to what you get by measuring the energy of the bath at the beginning and at the end of the protocol (in classical stochastic TD you generally want your observables not to depend on the way you measure them).
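In formulas (standard stochastic thermodynamics, nothing specific to our work): along a trajectory \gamma, each jump k mediated by bath \alpha takes the system from a state with energy E_{n_k} to a state with energy E_{m_k}, and the heat absorbed from that bath is

\[
Q_\alpha[\gamma] = \sum_{k \,\in\, \text{jumps via } \alpha} \big( E_{m_k} - E_{n_k} \big),
\]

which, by energy conservation, coincides with the energy lost by bath \alpha between the two measurements.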
You can also define the heat in our system by assigning a value of heat to each quantum jump. This is essentially what we do when determining heat through the full counting statistics of photons. Our definition is thus completely analogous to the definition in classical stochastic TD. The two-point measurement scheme should be seen as a further motivation for this approach (which is by no means our invention). In practice, it may turn out to be feasible to measure these quantum jumps; there are some interesting ideas for measuring them via temperature measurements (see also Bayan Karimi’s talk).
Also note that if you have multiple baths, you cannot in general obtain the heat from each bath by measuring only the system (even in the classical case). The reason is that the same transition may be mediated by different baths.
2) This is actually a very interesting point that kept the charge-transport community busy for some time. There were essentially two conceptual difficulties. First, it was realized that the full counting statistics (FCS) approach developed by Levitov, Lee, and Lesovik (https://aip.scitation.org/doi/abs/10.1063/1.531672) can result in negative probabilities (https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.87.197006). Second, when evaluating higher moments of the transferred charge, current operators with different time arguments appear (since charge is the time integral of the current), and it is a non-trivial issue to determine how one should time-order the operators in these expressions.
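For readers less familiar with this literature, the Keldysh-ordered (Levitov-Lee-Lesovik) generating function has the following structure (a standard sketch in generic notation, with \hat{N} the transferred-charge operator and U(t) the time evolution):

\[
\chi(\lambda) = \mathrm{Tr}\Big[ e^{i\lambda \hat{N}/2}\, U(t)\, e^{-i\lambda \hat{N}/2}\, \rho\, e^{-i\lambda \hat{N}/2}\, U^\dagger(t)\, e^{i\lambda \hat{N}/2} \Big],
\qquad
P(n) = \int \frac{d\lambda}{2\pi}\, e^{-i\lambda n}\, \chi(\lambda).
\]

For \rho diagonal in the eigenbasis of \hat{N}, this reproduces the two-point measurement statistics; when coherences in that basis are present, P(n) need not be positive.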
The resolution to both problems comes from considering a linear detector (https://link.springer.com/article/10.1140%2Fepjb%2Fe2003-00293-1; an electronic circuit can be modeled as a linear detector). It turns out that the distribution describing the measurement outcomes is a convolution of the FCS with the Wigner function of the detector. The well-known imprecision/backaction trade-off is then a result of the Heisenberg uncertainty relation for the detector. The current understanding is that the FCS describes the intrinsic fluctuations of the system, which are convolved with measurement imprecision and backaction. This is nicely explained in https://journals.aps.org/pra/abstract/10.1103/PhysRevA.84.043824, and I also tried to build on this insight in https://quantum-journal.org/papers/q-2017-10-12-32/. The operator ordering that results from this treatment is the Keldysh time ordering.
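Schematically, the structure of that result is (a compressed, one-dimensional sketch with illustrative notation, not the full expressions of those papers): the measured outcome distribution is the FCS quasi-probability P(n) smeared by the Wigner function W_{\mathrm{det}} of the detector pointer,

\[
P_{\mathrm{meas}}(x) = \int dn\, W_{\mathrm{det}}(x - n)\, P(n),
\]

which is positive even when P(n) is not; the spread of W_{\mathrm{det}} encodes the imprecision, and the Heisenberg constraints on W_{\mathrm{det}} tie that imprecision to the backaction.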
In many scenarios, it turns out that a charge-current measurement is backaction-free. In this case the FCS is a positive and measurable distribution. Negative values in the FCS appear in the presence of superpositions of different charge states; this happens in the presence of superconductors, as well as at short times due to coherences in the initial state. It is thus completely natural that people can be experts on FCS without ever encountering negative values.
I believe that the quantum thermodynamics community can learn a lot from these insights that the transport community had on fluctuations in phase-coherent systems. Essentially, when considering work fluctuations, we face a similar problem of time-ordering the power operators that appear in the higher moments.
Dear Patrick, your talk was very nice! I am interested in studying similar quantum heat engines away from the weak-coupling regime. Regarding your work, how would you approach the problem when the coupling to the two heat baths is not so small?
In a recent paper (arXiv:2004.02987), I studied a very simple instance of an isothermal steady-state engine, where violations of the thermodynamic uncertainty relation (TUR) due to quantum coherence in the working-medium dynamics can occur. Do you expect violations of the static TUR in your system?
Thank you again for your talk!
Hi Loris,
Thanks for your interest. Concerning the TUR, I believe it is always fulfilled in our system. The reason is that our system can be described as a network of coupled harmonic oscillators with a bilinear Hamiltonian. This paper (https://journals.aps.org/pre/abstract/10.1103/PhysRevE.100.042101) shows that for such systems the TUR always holds. Also, the variance in heat is equal to the variance in work, so we don’t have to worry about any first law violations here (they become apparent in the higher cumulants).
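For reference, the TUR in question, in its standard steady-state form: for a time-integrated current J_t and entropy production rate \sigma,

\[
\frac{\mathrm{Var}(J_t)}{\langle J_t \rangle^2} \;\geq\; \frac{2 k_B}{\sigma\, t}.
\]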
I don’t have much expertise in strong-coupling thermodynamics, so I am not sure which approach would work best for this system. I am sure you can learn a lot about strong coupling from other talks at this conference, though.
Patrick, I’ve just seen your talk about probabilistic violations of the first law of thermodynamics and found it interesting, although I’m still struggling to form a full picture of what you’ve developed. Is there an arXiv reference on this subject already? I’m not familiar with the superconducting Hamiltonian you showed; however, I have been studying the path-integral approach to quantum thermodynamics lately, in particular how to derive fluctuation theorems from it, and based on your talk I am interested in how one could describe those joint probabilities through the path-integral formalism. If that is already known to you, I’d be very happy to hear about it! Thank you again for your talk!
Hi Ricardo,
Thanks for your interest. Unfortunately, there is no preprint yet, but we are working on it. As for path integrals, the full counting statistics approach can usually be cast into a path-integral formulation, but you might know more about this than I do.