Miscellany

Thanksgiving

This year we give thanks for Arrow’s Impossibility Theorem. (We’ve previously given thanks for the Standard Model Lagrangian, Hubble’s Law, the Spin-Statistics Theorem, conservation of momentum, effective field theory, the error bar, gauge symmetry, Landauer’s Principle, the Fourier Transform, Riemannian Geometry, the speed of light, the Jarzynski equality, the moons of Jupiter, space, black hole entropy, and electromagnetism.)

Arrow’s Theorem is not a result in physics or mathematics, or even in physical science, but rather in social choice theory. To fans of social-choice theory and voting models, it is as central as conservation of momentum is to classical physics; if you’re not such a fan, you may never have even heard of it. But as you will see, there is something physics-y about it. Connections to my interests in the physics of democracy are left as an exercise for the reader.

Here is the setup. You have a set of voters {1, 2, 3, …} and a set of choices {A, B, C, …}. The choices may be candidates for office, but they may equally well be where a group of friends is going to meet for dinner; it doesn’t matter. Each voter has a ranking of the choices, from most favorite to least, so that for example voter 1 might rank D first, A second, C third, and so on. We will ignore the possibility of ties or indifference concerning certain choices, but they’re not hard to include. What we don’t include is any measure of intensity of feeling: we know that a certain voter prefers A to B and B to C, but we don’t know whether (for example) they could live with B but hate C with a burning passion. As Kenneth Arrow observed in his original 1950 paper, it’s hard to objectively compare intensity of feeling between different people.

The question is: how best to aggregate these individual preferences into a single group preference? Maybe there is one bully who just always gets their way. But alternatively, we could try to be democratic about it and have a vote. Once there are more than two choices, however, voting becomes tricky.

This has been appreciated for a long time, for example in the Condorcet Paradox (1785). Consider three voters and three choices, coming out as in this table.

Voter 1   Voter 2   Voter 3
A         B         C
B         C         A
C         A         B

Then simply posit that one choice is preferred to another if a majority of voters prefer it. The problem is immediate: more voters prefer A over B, and more voters prefer B over C, but more voters also prefer C over A. This violates the transitivity of preferences, which is a fundamental postulate of rational choice theory. Maybe we have to be more clever.
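
To make the cycle concrete, here is a minimal Python sketch (mine, not from the original post) that takes the three rankings above and applies majority rule to every pairwise matchup:

```python
from itertools import combinations

# The three rankings from the Condorcet example above, most preferred first.
rankings = [
    ["A", "B", "C"],  # Voter 1
    ["B", "C", "A"],  # Voter 2
    ["C", "A", "B"],  # Voter 3
]

def prefers(ranking, x, y):
    """True if this voter ranks x above y."""
    return ranking.index(x) < ranking.index(y)

for x, y in combinations(["A", "B", "C"], 2):
    x_votes = sum(prefers(r, x, y) for r in rankings)
    winner = x if x_votes > len(rankings) - x_votes else y
    print(f"{x} vs {y}: majority prefers {winner}")

# Prints:
#   A vs B: majority prefers A
#   A vs C: majority prefers C
#   B vs C: majority prefers B
# A beats B and B beats C, yet C beats A: the group preference is intransitive.
```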

So, much like Euclid did a while back for geometry, Arrow set out to state some simple postulates we can all agree a good voting system should have, then figure out what kind of voting system would obey them. The postulates he settled on (as amended by later work) are:

  • Nobody is a dictator. The system is not just “do what Voter 1 wants.”
  • Independence of irrelevant alternatives. If the method says that A is preferred to B, adding in a new alternative C will not change the relative ranking between A and B.
  • Pareto efficiency. If every voter prefers A over B, the group prefers A over B.
  • Unrestricted domain. The method provides group preferences for any possible set of individual preferences.

These seem like pretty reasonable criteria! And the answer is: you can’t do it. Arrow’s Theorem proves that there is no ranked-choice voting method that satisfies all of these criteria. I’m not going to prove the theorem here, but the basic strategy is to find a subset of the voting population whose preferences are always satisfied, then find a smaller such subset within it, and keep going until you have narrowed things down to a single dictator.

It’s fun to go through different proposed voting systems and see how they fall short of Arrow’s conditions. Consider for example the Borda Count: give 1 point to a choice for every voter ranking it first, 2 points for second, and so on, finally crowning the choice with the least points as the winner. (Such a system is used in some political contexts, and frequently in handing out awards like the Heisman Trophy in college football.) Seems superficially reasonable, but this method violates the independence of irrelevant alternatives. Adding in a new option C that many voters put between A and B will increase the distance in points between A and B, possibly altering the outcome.
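
To see the failure explicitly, here is a small sketch with made-up numbers (not an example from the post): three voters prefer A to B and two prefer B to A, and merely slotting a new option C into the middle of the minority’s ballots flips the Borda outcome from A to B.

```python
def borda_scores(profiles):
    """Sum each choice's rank positions (1 = first place);
    under the convention above, the lowest total wins."""
    scores = {}
    for ranking in profiles:
        for position, choice in enumerate(ranking, start=1):
            scores[choice] = scores.get(choice, 0) + position
    return scores

# Hypothetical electorate: 3 voters prefer A to B, 2 prefer B to A.
without_c = [["A", "B"]] * 3 + [["B", "A"]] * 2
print(borda_scores(without_c))  # {'A': 7, 'B': 8} -> A wins

# Same voters, same opinions about A vs B, but the B supporters
# now rank the new option C between their two choices.
with_c = [["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 2
print(borda_scores(with_c))     # {'A': 9, 'B': 8, 'C': 13} -> B wins
```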

Arrow’s Theorem reflects a fundamental feature of democratic decision-making: the idea of aggregating individual preferences into a group preference is not at all straightforward. Consider the following set of preferences:

Voter 1   Voter 2   Voter 3   Voter 4   Voter 5
A         A         A         D         D
B         B         B         B         B
C         D         C         C         C
D         C         D         A         A

Here a simple majority of voters have A as their first choice, and many common systems will spit out A as the winner. But note that the dissenters seem to really be against A, putting it dead last. And their favorite, D, is not that popular among A’s supporters. But B is ranked second by everyone. So perhaps one could make an argument that B should actually be the winner, as a consensus not-so-bad choice?
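
As a quick check (my sketch, not the post’s), here is the table above run through both a bare plurality count and the Borda procedure; the two disagree, with A winning the former and B the latter.

```python
from collections import Counter

# The five rankings from the table above, most preferred first.
profiles = [
    ["A", "B", "C", "D"],  # Voter 1
    ["A", "B", "D", "C"],  # Voter 2
    ["A", "B", "C", "D"],  # Voter 3
    ["D", "B", "C", "A"],  # Voter 4
    ["D", "B", "C", "A"],  # Voter 5
]

# Plurality: only first-place votes count.
first_places = Counter(ranking[0] for ranking in profiles)
print(first_places.most_common(1))  # [('A', 3)]

# Borda (1 point per rank position, lowest total wins): B, everyone's
# second choice, comes out ahead.
totals = Counter()
for ranking in profiles:
    for position, choice in enumerate(ranking, start=1):
        totals[choice] += position
print(min(totals, key=totals.get), dict(totals))
# B {'A': 11, 'B': 10, 'C': 16, 'D': 13}
```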

Perhaps! Methods like the Borda Count are intended to allow for just such a possibility. But they have their own problems, as we’ve seen. Arrow’s Theorem assures us that every ranked-voting system is going to have some kind of problem.

By far the most common voting system in the English-speaking world is plurality voting, or “first past the post.” There, only the first-place preferences count (you only get to vote for one choice), and whoever gets the largest number of votes wins. It is universally derided by experts as a terrible system! A small improvement is instant-runoff voting, sometimes just called “ranked choice,” although the latter designation implies something broader. There, we gather complete rankings, count up all the top choices, and declare a winner if someone has a majority. If not, we eliminate whoever got the fewest first-place votes, and run the procedure again. This is … slightly better, as it allows for people to vote their conscience a bit more easily. (You can vote for your beloved third-party candidate, knowing that your vote will be transferred to your second-favorite if they don’t do well.) But it’s still rife with problems.
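
For concreteness, here is a minimal sketch of the elimination-and-transfer procedure just described, run on a made-up nine-voter profile (nothing from the post) in which a third-place candidate is dropped and their supporters’ ballots carry B past A:

```python
from collections import Counter

def instant_runoff(profiles):
    """Repeatedly drop the choice with the fewest first-place votes until
    some choice holds a strict majority of the remaining ballots."""
    ballots = [list(ranking) for ranking in profiles]
    while True:
        tally = Counter(ballot[0] for ballot in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return leader
        loser = min(tally, key=tally.get)
        # Transfer the loser's ballots to each voter's next surviving choice.
        ballots = [[c for c in ballot if c != loser] for ballot in ballots]

# Hypothetical profile: C is eliminated first, and the two C ballots
# transfer to B, who then beats A five votes to four.
profiles = [["A", "B", "C"]] * 4 + [["B", "A", "C"]] * 3 + [["C", "B", "A"]] * 2
print(instant_runoff(profiles))  # B
```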

One way to avoid Arrow’s result is to allow people to express the intensity of their preferences after all, in what is called cardinal voting (or range voting, or score voting). This lets voters indicate that they love A, would grudgingly accept B, but would hate to see C. This slips outside Arrow’s assumptions, and allows us to construct a system that satisfies all of his criteria.

There is some evidence that cardinal voting leads to less “regret” among voters than other systems; for example, in numerical simulations by Warren Smith (where it is labeled “range voting”), it comes out at the favorable end of the spectrum of voting systems.

On the other hand — is it practical? Can you imagine elections with 100 candidates, and asking voters to give each of them a score from 0 to 100?

I honestly don’t know. Here in the US our voting procedures are already laughably primitive, in part because that primitivity serves the purposes of certain groups. I’m not that optimistic that we will reform the system to obtain a notably better result, but it’s still interesting to imagine how well we might potentially do.

Thanksgiving

This year we give thanks for something we’ve all heard of, but maybe don’t appreciate as much as we should: electromagnetism. (We’ve previously given thanks for the Standard Model Lagrangian, Hubble’s Law, the Spin-Statistics Theorem, conservation of momentum, effective field theory, the error bar, gauge symmetry, Landauer’s Principle, the Fourier Transform, Riemannian Geometry, the speed of light, the Jarzynski equality, the moons of Jupiter, space, and black hole entropy.)

Physicists like to say there are four forces of nature: gravitation, electromagnetism, the strong nuclear force, and the weak nuclear force. That’s a somewhat sloppy and old-fashioned way of talking. In the old days it made sense to distinguish between “matter,” in the form of particles or fluids or something like that, and “forces,” which pushed around the matter. These days we know it’s all just quantum fields, and both matter and forces arise from the behavior of quantum fields interacting with each other. There is an important distinction between fermions and bosons, which almost maps onto the old-fashioned matter/force distinction, but not quite. If it did, we’d have to include the Higgs force among the fundamental forces, but nobody is really inclined to do that.

The real reason we stick with the traditional four forces is that (unlike the Higgs) they are all mediated by a particular kind of bosonic quantum field, called gauge fields. There’s a lot of technical stuff that goes into explaining what that means, but the basic idea is that the gauge fields help us compare other fields at different points in space, when those fields are invariant under a certain kind of symmetry. For more details, check out this video from the Biggest Ideas in the Universe series (but you might need to go back to pick up some of the prerequisites).

The Biggest Ideas in the Universe | 15. Gauge Theory

All of which is just throat-clearing to say: there are four forces, but they’re all different in important ways, and electromagnetism is special. All the forces play some kind of role in accounting for the world around us, but electromagnetism is responsible for almost all of the “interestingness” of the world of our experience. Let’s see why.

When you have a force carried by a gauge field, one of the first questions to ask is what phase the field is in (in whatever physical situation you care about). This is “phase” in the same sense as “phase of matter,” e.g. solid, liquid, gas, etc. In the case of gauge theories, we can think about the different phases in terms of what happens to lines of force — the imaginary paths through space that we would draw to be parallel to the direction of the force exerted at each point.

The simplest thing that lines of force can do is just to extend away from a source, traveling forever through space until they hit some other source. (For electromagnetism, a “source” is just a charged particle.) That corresponds to the field being in the Coulomb phase. Infinitely-stretching lines of force dilute in density as the area through which they are passing increases. In three dimensions of space, that area is the surface of the spheres we draw around the source, which grows as the distance squared. The magnitude of the force therefore goes as the inverse of the square — the famous inverse square law. In the real world, both gravity and electromagnetism are in the Coulomb phase, and exhibit inverse-square laws.
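
To spell out the geometric bookkeeping (my paraphrase, not an equation from the post): the same fixed number of field lines threads every sphere we draw around the source, so the density of lines, and with it the strength of the force, falls off as the sphere’s surface area grows:

\[
F(r) \;\propto\; \frac{N_{\rm lines}}{4\pi r^{2}} \;\propto\; \frac{1}{r^{2}}.
\]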

But there are other phases. There is the confined phase, where lines of force get all tangled up with each other. There is also the Higgs phase, where the lines of force are gradually absorbed into some surrounding field (the Higgs field!). In the real world, the strong nuclear force is in the confined phase, and the weak nuclear force is in the Higgs phase. As a result, neither force extends farther than subatomic distances.

Phases of gauge fields.

So there are four gauge forces that push around particles, but only two of them are “long-range” forces in the Coulomb phase. The short-range strong and weak forces are important for explaining the structure of protons and neutrons and nuclei, but once you understand what stable nuclei there are, their work is essentially done, as far as accounting for the everyday world is concerned. (You still need them to explain fusion inside stars, but here we’re just thinking of life on Earth.) The way that those nuclei come together with electrons to make atoms and molecules and larger structures is all explained by the long-range forces, electromagnetism and gravity.

But electromagnetism and gravity aren’t quite equal here. Gravity is important, obviously, but it’s also pretty simple: everything attracts everything else. (We’re ignoring cosmology and the like, focusing on life here on Earth.) That’s nice — it’s good that we stay attached to the ground, rather than floating away — but it’s not a recipe for intricate complexity.

To get complexity, you need to be able to manipulate matter in delicate ways with your force. Gravity isn’t up to the task — it just attracts. Electromagnetism, on the other hand, is exactly what the doctor ordered. Unlike gravity, where the “charge” is just mass and all masses are positive, electromagnetism has both positive and negative charges. Like charges repel, and opposite charges attract. So by deftly arranging collections of positively and negatively charged particles, you can manipulate matter in whatever way you like.

That pinpoint control over pushing and pulling is crucial for the existence of complex structures in the universe, including you and me. Nuclei join with electrons to make atoms because of electromagnetism. Atoms come together to make molecules because of electromagnetism. Molecules interact with each other in different ways because of electromagnetism. All of the chemical processes in your body, not to mention in the world immediately around you, can ultimately be traced to electromagnetism at work.

Electromagnetism doesn’t get all the credit for the structure of matter. A crucial role is played by the Pauli exclusion principle, which prohibits two electrons from inhabiting exactly the same state. That’s ultimately what gives matter its size — why objects are solid, etc. But without the electromagnetic interplay between atoms of different sizes and numbers of electrons, matter would be solid but inert, just sitting still without doing anything interesting. It’s electromagnetism that allows energy to move from place to place between atoms, both via electricity (electrons in motion, pushed by electromagnetic fields) and radiation (vibrations in the electromagnetic fields themselves).

So we should count ourselves lucky that we live in a world where at least one fundamental force is both in the Coulomb phase and has opposite charges, and give appropriate thanks. It’s what makes the world interesting.

The Zombie Argument for Physicalism (Contra Panpsychism)

The nature of consciousness remains a contentious subject out there. I’m a physicalist myself — as I explain in The Big Picture and elsewhere, I think consciousness is best understood as weakly-emergent from the ordinary physical behavior of matter, without requiring any special ontological status at a fundamental level. In poetic-naturalist terms, consciousness is part of a successful way of talking about what happens at the level of humans and other organisms. “Being conscious” and “having conscious experiences” are categories that help us understand how human beings live and behave, while corresponding to goings-on at more fundamental levels in which the notion of consciousness plays no role at all. Nothing very remarkable about that — the same could be said for the categories of “being alive” or “being a table.” There is a great deal of work yet to be done to understand how consciousness actually works and relates to what happens inside the brain, but it’s the same kind of work that is required in other questions at the science/philosophy boundary, without any great metaphysical leaps required.

Not everyone agrees! I recently went on a podcast hosted by philosophers Philip Goff (former Mindscape guest) and Keith Frankish to hash it out. Philip is a panpsychist, who believes that consciousness is everywhere, underlying everything we see around us. Keith is much closer to me, but prefers to describe himself as an illusionist about consciousness.

S02E01 Sean Carroll: Is Consciousness Emergent?

Obviously we had a lot to disagree about, but it was a fun and productive conversation. (I’m nobody’s panpsychist, but I’m extremely impressed by Philip’s willingness and eagerness to engage with people with whom he seriously disagrees.) It’s a long video; the consciousness stuff starts around 17:30, and goes to about 2:04:20.

But despite the length, there was a point that Philip raised that I don’t think was directly addressed, at least not carefully. And it goes back to something I’m quite fond of: the Zombie Argument for Physicalism. Indeed, this was the original title of a paper that I wrote for a symposium responding to Philip’s book Galileo’s Error. But in the editing process I realized that the argument wasn’t original to me; it had appeared, in somewhat different forms, in a few previous papers:

  • Balog, K. (1999). “Conceivability, Possibility, and the Mind-Body Problem,” The Philosophical Review, 108: 497-528.
  • Frankish, K. (2007). “The Anti-Zombie Argument,” The Philosophical Quarterly, 57: 650-666.
  • Brown, R. (2010). “Deprioritizing the A Priori Arguments against Physicalism,” Journal of Consciousness Studies, 17 (3-4): 47-69.
  • Balog, K. (2012). “In Defense of the Phenomenal Concept Strategy,” Philosophy and Phenomenological Research, 84: 1-23.
  • Campbell, D., Copeland, J., and Deng, Z-R. (2017). “The Inconceivable Popularity of Conceivability Arguments,” The Philosophical Quarterly, 67: 223-240.

So the published version of my paper shifted the focus from zombies to the laws of physics.

The idea was not to explain how consciousness actually works — I don’t really have any good ideas about that. It was to emphasize a dilemma that faces anyone who is not a physicalist, someone who doesn’t accept the view of consciousness as a weakly-emergent way of talking about higher-level phenomena.

The dilemma flows from the following fact: the laws of physics underlying everyday life are completely known. They even have a name, the “Core Theory.” We don’t have a theory of everything, but what we do have is a theory that works really well in a certain restricted domain, and that domain is large enough to include everything that happens in our everyday lives, including inside ourselves. I won’t rehearse all the reasons we have for believing this is probably true, but they’re in The Big Picture, and I recently wrote a more technical paper that goes into some of the details.

Given that success, the dilemma facing the non-physicalist about consciousness is the following: either your theory of consciousness keeps the dynamics of the Core Theory intact within its domain of applicability, or it doesn’t. There aren’t any other options! I emphasize this because many non-physicalists are weirdly cagey about whether they’re going to violate the Core Theory. In our discussion, Philip suggested that one could rely on “strong emergence” to create new kinds of behavior without really violating the CT. You can’t. The fact that the CT is a local effective field theory completely rules out the possibility, for reasons I talk about in the above two papers.

That’s not to say we are certain the Core Theory is correct, even in its supposed domain of applicability. As good scientists, we should always be open to the possibility that our best current theories will be proven inadequate by future developments. It’s absolutely fine to base your theory of consciousness on the idea that the CT will be violated by consciousness itself — that’s one horn of the above dilemma. The point of “Consciousness and the Laws of Physics” was simply to emphasize the extremely high standard to which any purported modification should be held. The Core Theory is extraordinarily successful, and to violate it within its domain of applicability means not only that we are tweaking a successful model, but that we are somehow contradicting some extremely foundational principles of effective field theory. And maybe consciousness does that, but I want to know precisely how. Show me the equations, explain what happens to energy conservation and gauge invariance, etc.

Increasingly, theorists of consciousness appreciate this fact. They therefore choose the other horn of the dilemma: leave the Core Theory intact as a theory of the dynamics of what happens in the world, but propose that a straightforward physicalist understanding fails to account for the fundamental nature of the world. The equations might be right, in other words, but to account for consciousness we should posit that Mind (or something along those lines) underlies all of the stuff obeying those equations. It’s not hard to see how this strategy might lead one to a form of panpsychism.

That’s fine! You are welcome to contemplate that. But then we physicalists are welcome to tell you why it doesn’t work. That’s precisely what the Zombie Argument for Physicalism does. It’s not precisely an argument for physicalism tout court, but for the superiority of physicalism over a non-physicalist view that purports to explain consciousness while leaving the behavior of matter unaltered.

Usually, of course, the zombie argument is deployed against physicalism, not for it. I know that. We find ourselves in the presence of irony.

Core Theory T-Shirts

Way back when, for purposes of giving a talk, I made a figure that displayed the world of everyday experience in one equation. The label reflects the fact that the laws of physics underlying everyday life are completely understood.

So now there are T-shirts. (See below to purchase your own.)

Core Theory T-shirt

It’s a good equation, representing the Feynman path-integral formulation of an amplitude for going from one field configuration to another one, in the effective field theory consisting of Einstein’s general theory of relativity plus the Standard Model of particle physics. It even made it onto an extremely cool guitar.

I’m not quite up to doing a comprehensive post explaining every term in detail, but here’s the general idea. Our everyday world is well-described by an effective field theory. So the fundamental stuff of the world is a set of quantum fields that interact with each other. Feynman figured out that you could calculate the transition between two configurations of such fields by integrating over every possible trajectory between them — that’s what this equation represents. The thing being integrated is the exponential of the action for this theory — as mentioned, general relativity plus the Standard Model. The GR part integrates over the metric, which characterizes the geometry of spacetime; the matter fields are a bunch of fermions, the quarks and leptons; the non-gravitational forces are gauge fields (photon, gluons, W and Z bosons); and of course the Higgs field breaks symmetry and gives mass to those fermions that deserve it. If none of that makes sense — maybe I’ll do it more carefully some other time.
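
For readers who want to see it spelled out, here is a schematic transcription of the sort of expression that appears on the shirt. Take it as a rough sketch: normalization conventions, index placement, and the exact grouping of terms follow the actual figure only loosely.

\[
\langle \phi_{\rm out} | e^{-iHt} | \phi_{\rm in} \rangle
= \int_{k<\Lambda} [\mathcal{D}g]\,[\mathcal{D}A]\,[\mathcal{D}\psi]\,[\mathcal{D}\Phi]\;
\exp\!\left\{ i\int d^{4}x \,\sqrt{-g}\,\left[
\tfrac{m_{\rm p}^{2}}{2}\,R
- \tfrac{1}{4}F^{a}_{\mu\nu}F^{a\,\mu\nu}
+ i\bar\psi\,\gamma^{\mu}D_{\mu}\psi
+ |D_{\mu}\Phi|^{2} - V(\Phi)
+ \left(\bar\psi\,Y\,\Phi\,\psi + {\rm h.c.}\right)
\right] \right\}
\]

Reading left to right inside the brackets: the Einstein-Hilbert term for gravity, the gauge-field kinetic terms, the fermion kinetic terms, the Higgs kinetic and potential terms, and the Yukawa couplings that give the fermions their masses.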

Gravity is usually thought to be the odd force out when it comes to quantum mechanics, but that’s only if you really want a description of gravity that is valid everywhere, even at (for example) the Big Bang. But if you only want a theory that makes sense when gravity is weak, like here on Earth, there’s no problem at all. The little notation k < Λ at the bottom of the integral indicates that we only integrate over low-frequency (long-wavelength, low-energy) vibrations in the relevant fields. (That’s what gives away that this is an “effective” theory.) In that case there’s no trouble including gravity. The fact that gravity is readily included in the EFT of everyday life has long been emphasized by Frank Wilczek. As discussed in his latest book, A Beautiful Question, he therefore advocates lumping GR together with the Standard Model and calling it The Core Theory.

I couldn’t agree more, so I adopted the same nomenclature for my own upcoming book, The Big Picture. There’s a whole chapter (more, really) in there about the Core Theory. After finishing those chapters, I rewarded myself by doing something I’ve been meaning to do for a long time — put the equation on a T-shirt, which you see above.

I’ve had T-shirts made before, with pretty grim results as far as quality is concerned. I knew this one would be especially tricky, what with all those tiny symbols. But I tried out Design-A-Shirt, and the result seems pretty impressively good.

So I’m happy to let anyone who might be interested go ahead and purchase shirts for themselves and their loved ones. Here are the links for light/dark and men’s/women’s versions. I don’t actually make any money off of this — you’re just buying a T-shirt from Design-A-Shirt. They’re a little pricey, but that’s what you get for the quality. I believe you can even edit colors and all that — feel free to give it a whirl and report back with your experiences.

Auction: Multiply-Signed Copy of Why Evolution Is True

Here is a belated but very welcome spinoff of our Moving Naturalism Forward workshop from 2012: Jerry Coyne was clever enough to bring along a copy of his book, Why Evolution Is True, and have all the participants sign it. He subsequently gathered a few more distinguished autographs, and to make it just a bit more beautiful, artist Kelly Houle added some original illustrations. Jerry is now auctioning off the book to benefit Doctors Without Borders. Check it out:

Here is the list of signatories:

  • Dan Barker
  • Sean Carroll
  • Jerry Coyne
  • Richard Dawkins
  • Terrence Deacon
  • Simon DeDeo
  • Daniel Dennett
  • Owen Flanagan
  • Annie Laurie Gaylor
  • Rebecca Goldstein
  • Ben Goren
  • Kelly Houle
  • Lawrence Krauss
  • Janna Levin
  • Jennifer Ouellette
  • Massimo Pigliucci
  • Steven Pinker
  • Carolyn Porco
  • Nicholas Pritzker
  • Alex Rosenberg
  • Don Ross
  • Steven Weinberg

Jerry is hoping it will fetch a good price to benefit the charity, so we’re spreading the word. I notice that a baseball signed by Mickey Mantle goes for about $2000. In my opinion a book signed by Steven Weinberg alone should go for even more, so just imagine what this is worth. You have ten days to get your bids in — and if it’s a bit pricey for you personally, I’m sure there’s someone who loves you enough to buy it for you.

A Simple Form of Poker “Essentially” Solved

You know it’s a good day when there are refereed articles in Science about poker. (Enthusiasm slightly dampened by the article being behind a paywall, but some details here.)

Poker, of course, is a game of incomplete information. You don’t know your opponent’s cards, they don’t know yours. Part of your goal should be to keep it that way: you don’t want to give away information that would let your opponent figure out what you have.

As a result, the best way to play poker (against a competent opponent) is to use a mixed strategy: in any given situation, you want to have different probabilities for taking various actions, rather than a deterministic assignment of the best thing to do. If, for example, you always raise with certain starting hands, and always call with others, an attentive player will figure that out, and thereby gain a great deal of information about your hand. It’s much better to sometimes play weak hands as if they are strong (bluffing) and strong hands as if they are weak (slow-playing). The question is: how often should you be doing that?

Now researchers in a University of Alberta group that studies computerized poker have offered an “essentially” perfect strategy for a very simple form of poker: Heads-Up Limit Hold’em. In Hold’em, each player has two “hole” cards face down, and there are five “board” cards face-up in the middle of the table; your hand is the best five-card combination you can form from your hole cards and the board. “Heads-up” means that only two players are playing (much simpler than a multi-player game), and “limit” means that any bet comes in a single pre-specified amount (much simpler than “no-limit,” where you can bet anything from a fixed minimum up to the size of your stack or your opponent’s, whichever is smaller).

A simple game, but not that simple. Bets occur after each player gets their hole cards, again after three cards (the “flop”) are put on the board, again after a fourth card (the “turn”), and finally after the last board card (the “river”) is revealed. If one player bets, the other can raise, and then the initial bettor can re-raise, up to a number of bets (typically four) that “caps” the betting.

So a finite number of things can possibly happen, which makes the game amenable to computer analysis. But it’s still a large number. There are about 3×10¹⁷ “states” that one can reach in the game, where a “state” is defined by a certain number of bets having been made as well as the configuration of cards that have already been dealt. Not easy to analyze! Fortunately (or not), as a player with incomplete information you won’t be able to distinguish between all of those states — i.e. you don’t know your opponent’s hole cards. So it turns out that there are about 3×10¹⁴ distinct “decision points” from which a player might end up having to act.

So all you need to do is: for each of those 300 trillion possibilities, assign the best possible mixed strategy — your probability to bet/check if there hasn’t already been a bet, fold/call/raise if there has — and act accordingly. Hey, nobody ever said being a professional poker player would be easy. (As you might know, human beings are very bad at randomness, so many professionals use the second hand on a wristwatch to generate pseudo-random numbers and guide their actions.)
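
As a toy illustration of what “acting on a mixed strategy” means in practice, here is a sketch with made-up probabilities (nothing resembling the actual Alberta solution):

```python
import random

# Hypothetical mixed strategy at a single decision point, facing a bet.
# The probabilities for the available actions must sum to 1.
strategy = {"fold": 0.15, "call": 0.55, "raise": 0.30}

def act(strategy, rng=random):
    """Sample one action according to the strategy's probabilities."""
    actions, weights = zip(*strategy.items())
    return rng.choices(actions, weights=weights, k=1)[0]

# A full solution assigns a table like this one to every one of the
# ~3x10^14 decision points; playing "perfectly" just means sampling from
# the right table each time you have to act.
print(act(strategy))
```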

Nobody is going to do that, of course. …

Summer Institute in Philosophy of Cosmology, Santa Cruz

This summer UC Santa Cruz will host a Summer Institute in Philosophy of Cosmology, from June 23 to July 15. There will be a great lineup of speakers, not to mention me. The “philosophy of cosmology” isn’t really a recognized intellectual discipline as yet, but some of us are trying to bring it into existence, so it’s an exciting time.

This is more of a summer school than a conference, so students and postdocs with an interest in the field should certainly think of applying. The deadline for applications is March 15, so don’t wait too long!

DonorsChoose 2012

DonorsChoose is a great program that lets people give small (or large, if that’s how they roll) charitable donations targeted at specific classrooms and educational programs around the country. We have participated frequently in the past, but this year we didn’t quite get our act together. But it doesn’t matter who sets up the donors page; there are many great programs out there looking for support.

So instead, this year we’re pointing people to Aatish Bhatia’s donor page. You might remember Aatish as the winner of this year’s 3 Quarks Daily blogging prize. Now he’s assembled a collection of worthy science education projects. Go throw a few bucks and feel good about yourself and the world!
