Physics & Maths items

I want to keep the tradition alive that this is an ‘interdisciplinary’ blog, i.e. to post about stuff I don’t understand, i.e. physics and math.

The first item is an interesting ongoing real-life experiment in the sociology of science. Eight years ago, Shinichi Mochizuki claimed to have proven the abc-conjecture (I henceforth refuse for the indefinite future to be shamed by mathematicians for unimaginative technical terms in linguistics). Apparently, this is a conjecture about a deep and unexpected relationship between addition and multiplication.

To achieve that, Mochizuki developed a whole theory of his own, Inter-universal Teichmüller theory (again…), dropped something like 1500 pages of impenetrable, idiosyncratic notation on the arXiv and left it at that. He refuses to give talks or hold lectures on it outside of Japan and leaves it to his colleagues to try to explain it to other mathematicians. In these 8 years, nobody has been able to verify the proof. Granted, a lot of people simply didn’t try, because the volume of necessary reading was far too large and because they couldn’t follow the style of presentation. Then Jakob Stix and Peter Scholze (of Fields Medal fame) worked through the material, found an alleged gap in the proof, and a week-long meeting with Mochizuki in Japan couldn’t remove their doubts (portrayed in this Quanta article).

Now, the math journal of Mochizuki’s institute (of which he is an editor) has decided to publish his proof – a weird choice, given that publication usually means that a proof has been vetted and verified in the peer review process, while at the same time most experts in the field can’t follow the logic of the proof.

Peter Scholze also commented on the current situation on Peter Woit’s blog, with an ongoing discussion with people who claim to understand the proof.
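For what it’s worth, the statement of the conjecture itself is surprisingly elementary: for coprime positive integers with a + b = c, it roughly says that c is rarely much larger than the radical rad(abc), the product of the distinct primes dividing abc. Here is a minimal sketch (my own toy code, needless to say, with nothing of Mochizuki’s machinery in it):

```python
import math

def rad(n):
    """Product of the distinct prime factors of n (trial division)."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        r *= n
    return r

def quality(a, b, c):
    """q = log c / log rad(abc); the conjecture says q > 1 + eps only finitely often."""
    return math.log(c) / math.log(rad(a * b * c))

# Coprime triples with a + b = c; the last one is the famous high-quality triple
# 2 + 3^10 * 109 = 23^5.
for a, b in [(1, 8), (5, 27), (2, 3**10 * 109)]:
    c = a + b
    print(a, b, c, rad(a * b * c), round(quality(a, b, c), 3))
```

The “quality” exceeds 1 only for rare triples like the last one; the conjecture claims that for every ε > 0 only finitely many triples have quality above 1 + ε. That multiplication (via the primes of abc) constrains addition (a + b = c) in this way is the “deep and unexpected relationship” mentioned above.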

The last bits are from Scott Aaronson’s answers in his post “AMA: Apocalypse Edition“: here are his thoughts on the It from Qubit idea in fundamental physics (the whole spacetime emerges from quantum entanglement business) and whether undecidability/uncomputability are relevant for physics.

Finally, the editor’s choice for interesting article today appears in Quanta about progress in the Langlands program (a set of conjectures relating vastly different fields in maths to each other in deep and surprising ways).


Posted in Physics | Leave a comment

If only we could identify patterns in humans’ cognitive behaviour…

A while ago there was an article in Quanta Magazine about neuroscience, “To Decode the Brain, Scientists Automate the Study of Behaviour“. Its main gist is that scientists used machine learning to classify patterns in the behaviour of animals, or, as the subtitle puts it, to “capture and analyze the ‘language’ of animal behavior”.

This quote puts it quite succinctly:

By studying animals’ behaviors more rigorously and quantitatively, researchers are hoping for deeper insights into the unobservable “drives,” or internal states, responsible for them. “We don’t know the possible states an animal can even be in,” wrote Adam Calhoun […]

Tracing those internal states back to specific activity in the brain’s complex neural circuitry presents a further hurdle. Although sophisticated tools can record from thousands of neurons at once, “we don’t understand the output of the brain,” Datta said. “Making sense of these dense neural codes is going to require access to a richer understanding of behavior.”

The article then goes on to describe how modern technology like motion tracking revolutionized the quantitative study of the behavior of animals by letting scientists track, collect and analyze movement patterns and so on.

The article then segues into deeper questions:

Because pose-tracking software has simplified data collection, “now we can think about other problems,” said Benjamin de Bivort, a behavioral biologist at Harvard University. Starting with: How do we define the building blocks of behavior, and how do we interpret them? […]

The zoologist Ilan Golani at Tel Aviv University has spent much of the past six decades in search of a less arbitrary way to describe and analyze behavior — one involving a fundamental unit of behavior akin to the atom in chemistry.

It goes on to describe a breakthrough that discovered minimal building blocks in the movements of mice.

The dynamics of the animals’ three-dimensional behavior seemed to segment naturally into small chunks that lasted for 300 milliseconds on average. “This is just in the data. I’m showing you raw data,” Datta said. “It’s just a fundamental feature of the mouse’s behavior.”

Those chunks, he thought, looked an awful lot like what you might expect a unit of behavior to look like — like syllables, strung together through a set of rules, or grammar.
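As a toy illustration of that kind of chunking (entirely my own sketch – the actual pipeline fits a far more sophisticated statistical model to 3D pose data), one can already “discover” behavioral syllables in a fake pose trace just by looking for sudden jumps:

```python
import random

random.seed(0)

# Fake 1D "pose" trace: four behavioral states, ~300 ms each at 100 frames/s,
# i.e. 30 frames per chunk, with a little measurement noise on top.
states = [0.0, 1.0, -0.5, 0.8]
signal = [s + random.gauss(0, 0.05) for s in states for _ in range(30)]

# Declare a syllable boundary wherever the frame-to-frame jump is large
# compared to the noise level.
boundaries = [i for i in range(1, len(signal))
              if abs(signal[i] - signal[i - 1]) > 0.5]

print(boundaries)  # one boundary at each state change: [30, 60, 90]
```

MoSeq, as described in the article, does something conceptually similar but in many dimensions and with a probabilistic model that also learns how syllables follow one another – the “grammar”.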

This is of course only an analogy, a handy metaphor, since language is not behaviour.
Thanks to these advances, “they’re starting to make the first connections to the brain and its internal states”.

Datta and his colleagues discovered that in the striatum, a brain region responsible for motor planning and other functions, different sets of neurons fire to represent the different syllables identified by MoSeq. So “we know that this grammar is directly regulated by the brain,” Datta said. “It’s not just an epiphenomenon, it’s an actual thing the brain controls.”

The article then ends with this inspirational passage:

The scientists are careful to note that these techniques should enhance and complement traditional behavioral studies, not replace them. They also agree that much work needs to be done before core universal principles of behavior will start to emerge. Additional machine learning models will be needed, for example, to correlate the behavioral data with other complex types of information.

“This is very much a first step in terms of thinking about this problem,” Datta said. He has no doubt that “some kid is going to come up with a much better way of doing this.” Still, “what’s nice about this is that we’re getting away from the place where ethologists were, where people were arguing with each other and yelling at each other over whether my description is better than yours. Now we have a yardstick.”

“We are getting to a point where the methods are keeping up with our questions,” Murthy said. “That roadblock has just been lifted. So I think that the sky’s the limit. People can do what they want.”

Reading this article as a linguist triggers a mix of very diverse feelings. On the one hand, it is great to see these advances in the study of animal behaviour and its links to neural signals. On the other hand, it is funny to see how the mere beginnings of an understanding of patterns in behaviour are hailed as a breakthrough, while a general classification is viewed as the holy grail that could take cognitive science/neuroscience to unimaginable new depths.

We already have a comparatively advanced understanding of one behaviour (a higher cognitive function, even) of one animal: language in Homo sapiens. We actually know of ‘syllables’ in the structure of language and how they are strung together, almost as if we had an understanding of the ‘grammar’ of how language works. We are even so far along that we can have theoretical debates about the structure of these patterns: what the actual minimal building blocks are, whether some of them have to be broken up further, how these building blocks are related to each other, what the best framework is to talk about them, etc. That is, we are far beyond simply collecting quantitative data. In this case it is even comparatively easy to get access to such data.
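Just to drive the point home, here is how trivially one can operationalize “syllables strung together by rules” for a toy fragment of language – a sketch of my own that only handles strict consonant-vowel (CV) words, something real phonology of course vastly outgrows:

```python
VOWELS = set("aeiou")

def syllabify(word):
    """Split a strict CV word into its CV syllables (onset + nucleus)."""
    syllables, onset = [], ""
    for ch in word:
        if ch in VOWELS:
            syllables.append(onset + ch)  # a vowel closes the current syllable
            onset = ""
        else:
            onset += ch                   # consonants accumulate as the onset
    return syllables

print(syllabify("banana"))  # ['ba', 'na', 'na']
print(syllabify("patata"))  # ['pa', 'ta', 'ta']
```

The point is not that this is good phonology (it isn’t), but that linguistics argues at the level of which segmentation principles are right, not over whether segments exist at all.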

For some reason, however, this does not make linguistics the most advanced cognitive science around in the eyes of the public, neighbouring fields, or even some linguists themselves. Talking, the thinking goes, is not a cognitive function, it’s just stuff humans do, like filling out tax forms or knowing the rules of football. This is similar to Norbert Hornstein’s argument that if bee dances can earn you a Nobel prize in biology, linguists should be able to get one, too.

What’s even sadder, on a different level, is that even though we do have such a comparably deep understanding, it still hasn’t helped us achieve those Nobel-prize-worthy discoveries linking cognition to activity in the brain (which is the impression the end of the article leaves you with: “just imagine what we could do with the understanding of behaviour we’ll have in 50 years”).

So it’s still early days.


PS:

Before I forget, there have been some other articles with tangential links to linguistics in Quanta Magazine in the last year:

One is about recent efforts and achievements in Natural Language Understanding, i.e. more from an engineering perspective than a scientific one, but people like Tal Linzen and also colourless green ideas make an appearance.

The other is about discoveries about computational power inside single neurons, something that should make fans of Gallistel & King & co. happy.


Posted in Cognitive Science, Linguistics, Neuroscience | Leave a comment

Unrecently in the linguistic blogosphere

I stumbled upon a very interesting debate about opacity on two blogs that are no longer active: Mr. Verb and phonoloblog (now Phonolist, which is unfortunately not really a blog).

I love big architectural debates about core issues of grammar, especially when they are combined with theory comparison. The opacity debate in phonology is therefore the perfect battleground for heated discussions on blogs. Best to start here, here and here.

Posted in Linguistics | Tagged , | Leave a comment

Recently in physics & the linguistic blogosphere

While everyone has to stay at home, here are some good reads from the linguistic blogosphere to dispel boredom:

José-Luis Mendívil wrote an open letter to Martin Haspelmath on his blog Philosophy of Linguistics about innateness and building blocks of grammar. And here is the reply on Diversity Linguistics Comment.

Faculty of Language had its first blog post this year by Charles Yang, about recursion.

Another more philosophically minded post can be found over at Dan Milway’s blog on what kind of a science generative syntax is.

Not recent at all, but linguistics was somewhat in the news as pressing questions about Baby Yoda’s language acquisition arose in the public (this ultimately justifies why tax money is used to pay people like David Adger).
However, Chomsky, reaching the absolute low point of his political and linguistic career, claims never to have heard of Baby Yoda. What’s more, he has “no thoughts on memes“. It seems the “responsibility of intellectuals to speak truth” has gone out the window.

Finally, a new straw for fundamental physicists to cling to has appeared at the LHC: it seems that certain anomalies in the decays of B-mesons persist, although the statistical significance is still only at three-point-something sigma.
As a non-physicist, let me try to explain this, with the caveat that I have a kindergarten understanding of physics and what I say is probably incomplete or wrong: B-mesons are composite particles consisting of a quark and an antiquark (that’s what a meson is). The decay of B-mesons is particularly sensitive to the presence of potentially new particles outside of the Standard Model. For example, if the SM predicts that the decay of a B-meson into some particles X and Y should happen with a certain frequency (say, 1 in a million) but you actually find that decay happening more often than predicted, you know that the SM is incomplete, and you even get some hints as to what kind of particle could be responsible (although there are several options, and actual detection would require higher energies than are currently achievable at the LHC). That said, it’s not a 5-sigma discovery, and legions of 3-sigma anomalies have vanished after further scrutiny.
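To get a feel for what a “3 sigma” anomaly means in a counting experiment, here is a back-of-the-envelope sketch with made-up numbers (not real LHCb data!):

```python
import math

# Made-up toy numbers: suppose the Standard Model predicts mu = 100 decays
# of a given type in some dataset, but 130 are observed.
mu_sm = 100
observed = 130

# Naive significance: the excess measured in units of the Poisson
# standard deviation sqrt(mu).
z = (observed - mu_sm) / math.sqrt(mu_sm)

# One-sided probability of a chance fluctuation at least this large,
# treating the excess as approximately Gaussian.
p = 0.5 * math.erfc(z / math.sqrt(2))

print(f"{z:.1f} sigma, p = {p:.5f}")  # 3.0 sigma, p = 0.00135
```

A 3-sigma excess still occurs by chance roughly once in 740 tries, which is why particle physics insists on 5 sigma (p ≈ 3·10⁻⁷) before anyone says “discovery” – and why so many 3-sigma anomalies evaporate with more data.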

Posted in Linguistics, Physics | Leave a comment

Alec Marantz’s blog

Thanks to Omer’s blogroll I learned of a blog that I didn’t know before. Turns out, Alec Marantz blogs (mostly) about morphology on his MorphLab website. You see, this is exactly why I suggested that more people should add blogrolls to their sites! This makes it much easier for newcomers to have access to all the great material out there in the blogosphere. Or as Omer puts it, this is “just plain good online citizenship”.

As you can imagine, the posts over there have highly informed takes on morphology, especially Distributed Morphology. See e.g. this post on architectural questions in morphology and different senses of the word ‘lexicalist’.

Add this blog to your feed – and your blogroll!

Posted in Linguistics, Online resources linguistics, Uncategorized | Tagged | Leave a comment

2019

2019 is nearing its end, so I thought this is a good occasion to look back on what this year has brought us with regard to all things physics, linguistics and sci-comm.

Quanta magazine has, as usual, nice overviews of everything that happened in physics, math & computer science and biology.

The year in fundamental physics was … mixed. Seeing an image of a black hole surely was amazing (but knowing how heavily processed it is takes away a bit of the magic; see an amazing series of blog posts by Prof. Matt Strassler here). On the other hand, it only confirmed, for the umpteenth time, a theory that we already know is right.

The problems that could no longer be denied after the LHC turned up nothing are now really beginning to show. There are no really new exciting directions, while at the same time it is clear that the old ones are problematic and potentially fruitless (they still continue, though, because of inertia).

There are no experimental clues pointing towards a new research direction. Physicists were debating whether building a new, bigger collider is the best way to go, with Sabine Hossenfelder being a critical voice in this regard. Such a collider, however, is several decades away even if it were financed today. And to hope for supersymmetry to show up at 100 TeV is at this point delusional.

String theorists are debating where to take their field and whether string theory is actually a theory of the real world or only a useful tool on the way towards a final theory. While the actual equations of the conjectured M-theory are still at large, others (i.e. Vafa et al.) are trying to push the Swampland program in the hope of delimiting the possible theories of quantum gravity.

The most demoralizing thing this year was the hype around the many-worlds interpretation of quantum mechanics, amplified by certain publications, which let the topic be perceived as far more relevant than it actually is (an interpretation, not a theory), while probably (even if not intended) further undermining public trust in science (‘everyone can have their own interpretation of theories’).

However, an extremely awe-inspiring feat of engineering happened this year as we witnessed the moment of quantum supremacy.

One area of fundamental physics that is not completely cut off from experiment is the growing tension between different measurements of the expansion rate of the universe. This is probably the best hope for new physics that we currently have (apart from some neutrino anomalies and weak signs in B-physics).


As for linguistics, there is not much to tell this year, for several reasons. For reasons unknown to me, linguistics is not as big in the media as physics, and news stories about it never centre on theoretical linguistics but on stupid nonsense. Another reason is that I just started my PhD and feel like I know almost nothing about the state of the art in linguistics.

For me, there are two highlights this year with regard to linguistics.

One is that Omer Preminger is now blogging quite regularly. His posts are often quite insightful (although some of them are still above my pay grade) and lead to interesting discussions in the comment section. More often than not, they are about big architectural questions of grammar, since he is on a quest to meaningfully structure (see what I did there?) syntax and semantics.
So everyone should go over there and compliment, woo, seduce, serenade and persuade him to write even more next year!

The other absolutely amazing thing that blessed online linguistics this year is the blog Outdex by Thomas Graf et al. It fills the gap of a computational linguistics perspective on things, which is weird, strange, iconoclastic, eye-opening and fascinating. He writes very clearly and down to earth; even I can understand the stuff (most of the time). I can hardly link to a specific post, because most of them are really good and worth your time.

For these reasons, the blogs above appear at the top of my blogroll (the highest honour any mortal linguist can attain).

This year also saw the most successful blog post in the history of particlelinguistics. While my usual average is maybe 2 visitors per week, my overview of blogs in linguistics had almost 500 views in June this year. More people have read this post (than I have (… sorry, couldn’t resist)) than will ever read any scientific publication of mine. Not sure how that makes me feel.

I still hope that at some point I can turn this blog into what I originally intended it to be: a source of entertaining and insightful accounts of what theoretical linguistics is about, the things we try to find out and the big questions that have to be answered. But I still feel that I don’t know enough to blog confidently about all these issues. Maybe 2020 will be the year!

That said, happy new year 2020!

Posted in Linguistics, Physics | 2 Comments

Random links

It’s time for a linkfest again.

We start off with some philosophy of linguistics on The Brains Blog, where David Pereplyotchik talks about his book Psychosyntax. He discusses three different philosophical approaches towards linguistics: cognitivism, platonism and nominalism.

To get even more dusty, here is some history:
The MPI for Psycholinguistics (the only one of its kind) has a nice webpage in general, but specifically one about its history. There is also a short film by the same institute, ‘A celebration of language‘.

There is also the history of GLOW, with its manifesto, as well as a critical reply.

An interesting text on the website for MIT gradadmissions (linguistics is like physics) tries to explain what linguistics actually is to outsiders. Maybe this post doesn’t come too late for you if you are trying to make your family understand that, as a linguist, you don’t participate in excavations.

Talking Brains finally had a post not related to PhD positions: What is Cognition?

I found a blog post by Andrew Gelman about Chomsky, describing his sociological situation as follows: “Chomsky seems to be surrounded mostly by admirers or his haters. The admirers give no useful feedback, and the haters are so clearly against him that he can ignore them. As with others in that situation, Chomsky can then make the convenient choice to ignore the critics who are non-admirers and non-haters. From an intellectual standpoint, those are the people who require the most work to interact with.”

There is an interesting map with the achievements of Cognitive Science by Anna Riedl. Linguistics is basically missing but it’s an interesting map nonetheless.

Replicated Typo has a very entertaining post about the computational nature of language, in the style of a dialogue à la Plato.

This tumblr has a witty sentence that I couldn’t agree with more:

“Morpheme is the Smallest Meaningful Unit in a Language“

is the linguistic equivalent of

“Mitochondria is the Powerhouse of the Cell”

Finally (and absolutely unrelated), there is this story about how ultrafinitists in maths view the reality of numbers:

I have seen some ultrafinitists go so far as to challenge the existence of 2^100 as a natural number, in the sense of there being a series of “points” of that length. There is the obvious “draw the line” objection, asking where in 2^1, 2^2, 2^3, …, 2^100, do we stop having “Platonic reality”? Here this “…” is totally innocent, in that it can easily be replaced by 100 items (names) separated by commas. I raised just this objection with the (extreme) ultrafinitist Yessenin-Volpin during a lecture of his. He asked me to be more specific. I then proceeded to start with 2^1 and asked him if this was “real” or something to that effect. He virtually immediately said yes. Then I asked about 2^2 and he again said yes, but with a perceptible delay. Then 2^3 and yes, but with more delay. This continued for a couple more times, till it was obvious how he was handling this objection. Sure, he was prepared to always answer yes, but he was going to take 2^100 times as long to answer yes to 2^100 as he would to answering 2^1. There is no way that I could get very far with this.

Harvey M. Friedman, “Philosophical Problems in Logic”

That’s it for now! Happy holidays!

Posted in Linguistics | Tagged | Leave a comment

phantasia[i] now with RSS

For those of you who are interested in good linguistics blogs, phantasia[i] now has RSS (use this link). Nothing should stop you now from being constantly up to date on the linguistics blogosphere.

Posted in Linguistics, Online resources linguistics | Tagged | Leave a comment

More linguistics links

News on the linguistics blogs front:

Heidi Harley has a new post (after 3 years 😉) about a light-bulb moment concerning Hiaki echo vowels, in which she accessibly explains an idea she had about a problem she encountered.

Dan Milway continues his discussion of Jerrold Katz’s 1972 Semantic Theory after a hiatus probably caused by finishing his PhD (congrats!).

Chris Collins discusses different types of data in linguistic research and the EPP.

Thomas Graf’s series on subregular complexity in phonology finally arrived at syntax with takes on Merge and Move, islands and more islands. His writing is very clear and understandable, even for a computational amateur like myself. Simply read everything on this amazing blog!

Last but not least is Omer with some anecdotes about the occasional value of imprecise recall, where your brain sometimes generously autocorrects your memory of, say, a principle or a generalization into the version that turns out to be more helpful for your research.


On the ‘linguistics in popular press’ front we have an article about the Bender Rule (of hashtag fame), i.e. the rule that you should always mention the language that you are working on to avoid the subconscious equation “Natural Language = English”.

There’s a TEDtalk (the natural product of evolution if you introduce presentations into the ecological niche that is American culture) by Ed Gibson on how efficiency shapes human language.

Linguistics is in ArsTechnica with an MIT study that found that languages generally minimize dependency length.

Some people at MIT also enriched a recurrent neural net (RNN) with a grammar – it is almost as if this RNN had innate knowledge of language – and found that it compares favourably in performance to models with little or no added grammar.

A crossover between linguistics and mathematics (in this case applied category theory) is as usual above my paygrade: dynamic syntax, grammars as parsers, context-freeness and monoidal categories or whatever all that means over on the n-Category café. So if you want to go down the rabbit hole of general abstract nonsense (not my term!), there you go!

Posted in Linguistics | Tagged | Leave a comment

Linguistics links

The Nautilus issue on Language has come to an end, and it’s… mixed.
After highlights by David Adger, Christof Koch discusses (at least at the end of his article) whether language is necessary for consciousness. Next is a very good article on a similar issue, but with a focus on aphasia and cognitive abilities after its occurrence, by Anna Ivanova, a PhD student at MIT.

Next is an article about the “removal of cultural emblems”. Not sure what that has to do with language. Then we have an article in the category “Not wrong, but will definitely give readers a completely wrong impression of what the relevant and deep questions in linguistics are about, with the danger of leaving them with the hazy impression that Sapir-Whorf is right” (yup, I need a catchier name), on the expression of the colour ‘red’ in different languages. (This article in Scientific American is similar: linguists on Trump’s tweets…)

The issue ends with articles on freedom and mergers with AI, giving up any pretense of writing about language.

There have, however, been two language-related posts on Nautilus’s blog: one about the successes and failures of artificial language change, and another about language acquisition. The latter discusses a physicist’s attempt at finding a model for language acquisition, and of course it contains phase transitions (where are the symmetries and harmonic oscillators?!?); it is also discussed here.


Some other links:

On the philosophy/sociology of science front there is an article in the Atlantic about the fact that apparently all existing research on the genetics of depression is bogus and doesn’t hold up under scrutiny. It’s aptly titled “A waste of 1000 research papers”.

Stacy McGaugh, great as usual, has a nuanced discussion of the necessity of falsifiability and its shortcomings.

There is a nice twitter thread on specialization in Cognitive Science.

There is a post on OUP’s blog by William Levelt on the history of psycholinguistics in the pre-Chomsky era.

Finally, there is a nice linguistics puzzle on Chris Collins’s blog.

Posted in Linguistics | Tagged | Leave a comment