Wikipedia talk:WikiProject Physics/Archive March 2025
This is an archive of past discussions on Wikipedia:WikiProject Physics. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
There is talk of stopping citation bot from adding bibcodes, please comment. Headbomb {t · c · p · b} 17:33, 1 March 2025 (UTC)
Tensor categories: content, notability etc
A relatively new editor @Meelo Mooses has over the last couple of weeks created at least 10 (!) new pages, Modular tensor category, Fibonacci category, Fibonacci anyons, Algebraic theory of topological quantum information, Unitary modular tensor category, Bruguières modularity theorem, Modular group representation, Rank-finiteness for fusion categories, Schauenburg-Ng theorem, Müger's theorem; added a large amount of new content to an existing BLP Alexei Kitaev and created a new (Wikipedia) category Category:Topological quantum mechanics. I am fairly certain that some (perhaps most) have Wikipedia problems, for instance not encyclopedic, peacock terms, written like essays etc -- I have tagged a few of the pages in WP:NPP, not all. Those are perhaps not surprising for a new editor.
More critical is to what extent these are all notable and/or duplicated by existing articles. Most of these appear to be related to aspects of theoretical physics, quantum field theory, quantum information (although they are not showing up as new physics pages). This is a bit outside my comfort zone, so I am looking for comments here, or please add to the appropriate talk pages. (If you "adopt" some of these please let others know as this is a BIG list of pages to overview.)
N.B., I am going to cross-post to Talk Math because most of the pages start with "In Mathematics", although I am not sure if that is the right categorization. I am going to ask that people from that project post here to try and minimize overlaps. Ldm1954 (talk) 19:34, 1 March 2025 (UTC)
- Just an idle observation: several of these are related to Topological quantum computer, a concept created by Kitaev in 1992 and used evidently in Microsoft's new gizmo. So these may be concepts with relatively new notability and interest. Johnjbarton (talk) 19:47, 1 March 2025 (UTC)
- Thank you for taking the time to ensure that my pages are high quality and follow wikipedia standards!
- Maybe I can add a little bit of context around the pages. I first intended to write two pages (modular tensor categories and Fibonacci category) to serve as a hub for the key facts about these objects. Editors then flagged my pages for several issues. Notably, it seems like they contained too much information ("written like essays").
- So, in an effort to be in-line with wikipedia policies, I decided to break up the two big pages into a lot of little pages (hence the creation of 10ish new pages in a short amount of time). I've really appreciated the input of my fellow wikipedia editors.
- There are two points which I'm failing to understand - my alleged use of "peacock terms" and the extent to which my articles are "duplicated by existing articles".
- As for peacock terms - I understand what a peacock term is and why we shouldn't use them, but I can't find myself using them anywhere in the articles. I'd appreciate being pointed to an offending example.
- As for my articles duplicating other pages - yes, I completely agree that these topics are related to theoretical physics, quantum field theory, quantum information. However, the information in my pages is not contained in those other pages. I've talked to a lot of my colleagues (admittedly, they are all academics and not members of the "general population") who have wanted to see pages about modular tensor categories and related topics for a while now. I'm making these pages largely based on popular demand from people I know who think that these topics have not been accurately served by wikipedia. Meelo Mooses (talk) 19:57, 1 March 2025 (UTC)
- Just to clarify about "peacock", in Fibonacci category there are (bolded) peacock terms:
- "several notable algebraic"
- "his seminal 1989 paper"
- "A key insight"
- and perhaps a few more. These are quite minor issues that are easily fixed. Ldm1954 (talk) 20:13, 1 March 2025 (UTC)
- I would just add that any modifier beyond a simple fact in a primary source should be verifiable in a secondary source. For example, "his seminal 1989 paper" should have a source that talks specifically about the 1989 paper and characterizes it as "seminal" or similar. Otherwise these words are not needed. We should only be including notable, seminal, and key information anyway. Johnjbarton (talk) 20:20, 1 March 2025 (UTC)
- Point taken.
- Though, I disagree that we should only be talking about "seminal" and "key" information. In technical matters such as the ones discussed in that article, the overall scientific literature leading to the conclusion being described spans dozens of articles. There are a lot of papers which included intermediate insights (which I would certainly not describe as "seminal", and would also not describe as the "key" steps) that are important, and I feel are worthwhile to be discussed in wikipedia pages.
- (For instance, in that paragraph when I use the words " and thus when appropriately used can allow one to resolve BQP-complete problems" I am sweeping under the rug a series of several intermediate papers/insights) Meelo Mooses (talk) 20:55, 1 March 2025 (UTC)
- @Meelo Mooses thanks for your dedication to writing about these things. Please read through MOS:MATH and especially MOS:MATH#NOWE (which covers more than just "we"). Also I'm a little concerned about the sourcing and potentially the notability of some of these topics; at e.g. Algebraic theory of topological quantum information a lot of the sources are primary and many are self published. ByVarying | talk 04:16, 3 March 2025 (UTC)
Featured article review for Redshift
I have nominated Redshift for a featured article review here. Please join the discussion on whether this article meets the featured article criteria. Articles are typically reviewed for two weeks. If substantial concerns are not addressed during the review period, the article will be moved to the Featured Article Removal Candidates list for a further period, where editors may declare "Keep" or "Delist" in regards to the article's featured status. The instructions for the review process are here. Hog Farm talk 04:18, 5 March 2025 (UTC)
AI echo chamber
Out of curiosity I tried Google's Gemini on the topic I worked on yesterday, Mott scattering. I was trying to see if it would give me sources. The first few queries gave me wonderfully formatted summaries of general information on Mott scattering but I was only able to coax one not-very-interesting source (literally "an example" as I asked for ;-). Then I tried
- Can you give me a bibliography of physics sources about Mott scattering?
Boom: I got the sources I added yesterday! Of course I thought these were great sources ;-) Johnjbarton (talk) 19:54, 1 March 2025 (UTC)
- I have used the free ChatGPT4. I have found that it often literally fabricates 'references' to support what it says. I mean literally fabricates. On being pressed, it will admit that it has done so. It seems to have no shame about it. Chjoaygame (talk) 01:43, 3 March 2025 (UTC)
- I would be deeply troubled if such an entity did have shame. Remsense ‥ 论 02:13, 3 March 2025 (UTC)
- A paper that I read (sorry, lost the reference) pointed out that AI basically produces an "intuitive" output without going through any logical process since there is pressure to produce an answer rapidly, and then, when pressed, will rationalize what was presented, to the extent of extreme fabrication (they'd designed a simple experiment to demonstrate the basics of this conclusion). One needs to check everything produced by an AI in exhaustive detail to avoid including plausible-sounding nonsense, something that too few people seem to have realized. —Quondum 18:00, 3 March 2025 (UTC)
- That's a valuable comment. It explains things a bit. To deal with it, one strategy is to very carefully lead the AI in the direction you are interested in, taking care not to let it commit itself to things that are quite likely to turn out to be parroted orthodoxy. Chjoaygame (talk) 03:37, 4 March 2025 (UTC)
- I don't see how that strategy would work, short of erasing the session after each answer. Every time it gives an answer, it derives it heuristically and commits itself. To give you an idea, the experiment was along the lines of asking whether a number was prime. If it was composite and it said 'yes', it would then more often produce a plausible-looking factorization when asked to factor that number compared to if it had not first been asked about the compositeness, when it might say that it was prime. (Disclaimer: I am probably misreporting the detail, but my basic observation is that we tend to trust AI output far more than it is rational to do.) —Quondum 14:33, 4 March 2025 (UTC)
- You are right. It isn't a reliable strategy, though it might work sometimes to some extent. I don't trust the AI that I used one bit. Nevertheless, it can sometimes be useful. Chjoaygame (talk) 02:46, 5 March 2025 (UTC)
- I would, for example, ask for a list of references that show a point, then check by retrieving the references and seeing that they do support the point and meet the criteria before using them in WP. That way, I do not need to trust the AI, but it has done a lot of the legwork for me. (And no, I haven't tried this, but I suspect that this is exactly what Johnjbarton did when mentioning it above.) —Quondum 16:06, 5 March 2025 (UTC)
- I followed a sequence of 15 times asking for a correction for a particular reference. The AI explicitly admitted that each was wrong. All referred to fabricated non-existent journal articles. The principle seemed to be 'any lie will do'. Chjoaygame (talk) 22:31, 5 March 2025 (UTC)
- One claim I've seen several times is that LLMs are essentially just a fancy version of predictive text. I'm not expert enough in the area to evaluate that claim. But if you think of them that way, then coming up with a reference makes sense, because that's what you often see in that sort of textual context, and it might not be very related to whether the reference actually exists. --Trovatore (talk) 02:54, 6 March 2025 (UTC)
- I suppose AI will be like any other new tool: it will work best after we learn how to use it. Unfortunately it does not seem like it will be useful for finding good sources. The system does not seem to have actual reference information. The fabricated journal thing is a clear failure: the tool can't figure out that you want real-world information; it's just filling in a pattern it has learned for what a reference might look like. In my case it seems to have chosen to "plagiarize" wikipedia. The latest Google Gemini "Flash 2.0" is adding links to online sources FWIW. Johnjbarton (talk) 02:55, 6 March 2025 (UTC)
Terrell rotation
Terrell rotation has apparently been experimentally observed for the first time. Might be a good candidate to uplift and run by DYK if anyone is interested! TheDragonFire (talk) 17:49, 7 March 2025 (UTC)
Adiabatic connection fluctuation dissipation theorem (& OEP).
An editor recently added Adiabatic connection fluctuation dissipation theorem to Template:Electronic structure methods. I cannot defend this as a "method" in general use; a page is OK, but not as a practical method comparable to, for instance, pseudopotentials. Comments?
While I am raising this, I personally am OK with OEP being included although it is also not (yet) in common use.
N.B., the page Adiabatic connection fluctuation dissipation theorem needs some general work if anyone has the energy. Ldm1954 (talk) 14:17, 8 March 2025 (UTC)