
Namesakes

Image: Roque de Bonanza, via Wikimedia Commons (https://commons.wikimedia.org/wiki/File:Roque_de_Bonanza_1.jpg)

The color fuchsia is named after the flower of that name, which was named after 16th-century German botanist Leonhart Fuchs. And Fuchs is German for fox. So the color is named after a plant named after a man named after an animal.

The color orange is named after the fruit, rather than the other way around.

Canaries are named after the Canary Islands, rather than the other way around.

04/20/2024 UPDATE: The Canary Islands in turn derive their name from the Latin name Canariae Insulae, “islands of the dogs.” Pliny the Elder records that the islands contained “vast multitudes of dogs of very large size.” So animal -> islands -> bird. (Thanks, Bob, Randy, Derek, and Sam.)

And chartreuse is named after the French liqueur of that color, which is named after the Grande Chartreuse monastery of the Carthusian monks who created it, in the eponymous Chartreuse Mountains. Mountains -> monastery -> beverage -> color. (Thanks, John.)


Meta.ai Oh My!


“Meet Your New Assistant” says the announcement, going on with “Built With Llama 3”. And oh my goodness has it ever got a lot of coverage. So I thought I might as well try it.

My first cut was a little unfair; I asked it about a subject on which I am unchallenged as the world’s leading expert: Tim Bray. (That’s probably overstating it: My wife is clearly in the running.)

So I asked meta.ai “What does Tim Bray think of Google?” Twice; once on my phone while first exploring the idea, and again later on my computer. Before I go on, I should remark that both user interfaces are first-rate: Friction-free and ahead of the play-with-AI crowd. Anyhow, here are both answers; it may be relevant that I was logged into my long-lived Facebook account:

[Screenshots: meta.ai on Tim Bray and Google, take 1 and take 2]

The problem isn’t that these answers are really, really wrong (which they are). The problem is that they are terrifyingly plausible, and presented in a tone of serene confidence. For clarity:

  1. I am not a Computer Scientist. Words mean things.

  2. I worked for Google between March of 2010 and March of 2014.

  3. I was never a VP there nor did I ever have “Engineer” in my title.

  4. I did not write a blog post entitled “Goodbye, Google”. My exit post, Leaving Google, did not discuss advertising nor Google’s activities in China, nor in fact was it critical of anything about Google except for its choice of headquarters location. In fact, my disillusionment with Google (to be honest, with Big Tech generally) was slow to set in and really didn’t reach critical mass until these troubling Twenties.

  5. The phrase “advertising-based business model”, presented in quotes, does not appear in this blog. Quotation marks have meaning.

  6. My views are not, nor have they been, “complex and multifaceted”. I am embarrassingly mainstream. I shared the mainstream enchantment with the glamor of Big Tech until, sometime around 2020, I started sharing the mainstream disgruntlement.

  7. I can neither recall nor find instances of me criticizing Google’s decision-making process, nor praising its Open-Source activities.

What troubles me is that all of the actions and opinions attributed to meta.ai’s version of Tim Bray are things that I might well have done or said. But I didn’t.

This is not a criticism of Meta; their claims about the size and sophistication of their Llama 3 model seem believable and, as I said, the interface is nifty.

Is it fair for me to criticize this particular product offering based on a single example? Well, first impressions are important. But for what it’s worth, I peppered it with a bunch of other general questions and the pattern repeats: Plausible narratives containing egregious factual errors.

I guess there’s no new news here; we already knew that LLMs are good at generating plausible-sounding narratives which are wrong. It comes back to what I discussed under the heading of “Meaning”. Still waiting for progress.

The nice thing about science is that it routinely features “error bars” on its graphs, showing both the finding and the degree of confidence in its accuracy.
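For a concrete sense of what an error bar quantifies, here is a minimal Python sketch, with made-up measurement numbers purely for illustration, that turns repeated measurements of one quantity into a finding plus a conventional 95% confidence interval:

    import math

    # Hypothetical repeated measurements of the same quantity; the
    # numbers are made up purely for illustration.
    samples = [9.8, 10.1, 10.4, 9.9, 10.2, 10.0, 9.7, 10.3]

    n = len(samples)
    mean = sum(samples) / n

    # Sample variance (with Bessel's correction), then the standard
    # error of the mean.
    variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
    std_err = math.sqrt(variance / n)

    # A conventional 95% interval: mean +/- 1.96 standard errors,
    # assuming roughly normal errors (a sample this small would
    # properly use a t value, which widens the bar slightly).
    half_width = 1.96 * std_err
    print(f"finding: {mean:.2f} +/- {half_width:.2f} (95% confidence)")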

AI/ML products in general don’t have them.

I don’t see how it’s sane or safe to rely on a technology that doesn’t have error bars.

1 public comment

brennen (Boulder, CO):
It has been [0] days since I had a conversation with someone convinced we would soon be able to turn over such fallible human projects as "government" and "the operation of the economy" to AI.