<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Technology — notebook</title>
    <link>https://notesfromjason.writeas.com/tag:Technology</link>
    <description>[« From Jason](https://fromjason.xyz) &lt;span&gt;|&lt;/span&gt;  Free typos included. </description>
    <pubDate>Mon, 20 Apr 2026 02:04:51 +0000</pubDate>
    <item>
      <title>Apple is killing the cloud as we know it</title>
      <link>https://notesfromjason.writeas.com/apple-is-killing-the-cloud-as-we-know-it?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[A technology novice may know that iCloud syncs their documents across all their devices— iPhone, iPad, and Mac. Someone a little more tech-savvy may understand that the cloud, as a general concept, is also used to offload computing burdens like speech-to-text and virtual assistants. &#xA;&#xA;We all know that our data floating in the cloud is being kept, monitored, analyzed, and sold. Tell Alexa your hopes and dreams, how perhaps you are having trouble sleeping, and he&#39;ll blab to Jeff Bezos about it. Maybe Jeff uses that gossip to sell you a better mattress. &#xA;&#xA;Google Assistant and Cortana (RIP) are both blabbermouths, too. It&#39;s not unreasonable to think Siri rolls with the same crowds. But I&#39;m not so sure. Most of what you tell Siri never leaves your phone. Even for processing your voice, it&#39;s all done on your iDevice. &#xA;&#xA;Listen, I get it. We live in a capitalist world with faceless shareholders and petulant billionaires. Why would Apple be any different? Part of me knows that enough time passes and my foot will eventually reach my mouth. But right now, based on the information available, it&#39;s clear that Apple is killing the cloud as we know it. And it&#39;s doing so in favor of on-device computing and storage. &#xA;&#xA;Anything that floats in the air is encrypted and out of Tim Cook&#39;s reach. That&#39;s an incredible feat. It shows a path for tech companies to profit without soaking themselves in our data. Will Amazon or Google ever give up their data addiction? Likely not. But start-ups sprout every day.  &#xA;&#xA;Over the last few years, Apple has favored on-device computing and encryption:&#xA;&#xA;Siri processes on-device&#xA;So does dictation&#xA;And the Neural Network&#xA;Our files live on our devices first&#xA;All our stuff in iCloud is encrypted&#xA;&#xA;Plus, iPhones increase storage every year like clockwork. We can get 2TB mobile devices. That&#39;s wild. 
I mean, it&#39;s overpriced like hell, but it&#39;s still amazing. &#xA;&#xA;iCloud has essentially become a traffic guard that directs our digital lives from one iDevice to the next. And it never asks, &#34;How&#39;s your day, hot shot?&#34; That seems like a good thing. &#xA;&#xA;Note: This post is mainly rushed thoughts. I&#39;ll keep adding and polishing.&#xA;&#xA;---&#xA;Type: #Note&#xA;Re: #Apple #Technology&#xA;&#xA;---&#xA;&#xA;from Jason]]&gt;</description>
      <content:encoded><![CDATA[<p>A technology novice may know that iCloud syncs their documents across all their devices— iPhone, iPad, and Mac. Someone a little more tech-savvy may understand that the cloud, as a general concept, is also used to offload computing burdens like speech-to-text and virtual assistants.</p>

<p>We <em>all</em> know that our data floating in the cloud is being kept, monitored, analyzed, and sold. Tell Alexa your hopes and dreams, how perhaps you are having trouble sleeping, and he&#39;ll blab to Jeff Bezos about it. Maybe Jeff uses that gossip to sell you a better mattress.</p>

<p>Google Assistant and Cortana (RIP) are both blabbermouths, too. It&#39;s not unreasonable to think Siri rolls with the same crowds. But I&#39;m not so sure. Most of what you tell Siri never leaves your phone. Even for processing your voice, it&#39;s all done on your iDevice.</p>

<p>Listen, I get it. We live in a capitalist world with faceless shareholders and petulant billionaires. Why would Apple be any different? Part of me knows that, given enough time, my foot will eventually reach my mouth. But right now, based on the information available, it&#39;s clear that Apple is killing the cloud as we know it. And it&#39;s doing so in favor of on-device computing and storage.</p>

<p>Anything that floats in the air is encrypted and out of Tim Cook&#39;s reach. That&#39;s an incredible feat. It shows a path for tech companies to profit without soaking themselves in our data. Will Amazon or Google ever give up their data addiction? Likely not. But start-ups sprout every day.</p>

<p>Over the last few years, Apple has favored on-device computing and encryption:</p>
<ol><li>Siri processes on-device</li>
<li>So does dictation</li>
<li>And the Neural Engine</li>
<li>Our files live on our devices <em>first</em></li>
<li>All our stuff in iCloud is encrypted</li></ol>

<p>Plus, iPhones increase storage every year like clockwork. We can get 2TB mobile devices. That&#39;s wild. I mean, it&#39;s overpriced like hell, but it&#39;s still amazing.</p>

<p>iCloud has essentially become a traffic guard that directs our digital lives from one iDevice to the next. And it never asks, “How&#39;s your day, hot shot?” That seems like a <em>good</em> thing.</p>

<p><em>Note: This post is mainly rushed thoughts. I&#39;ll keep adding and polishing.</em></p>

<hr/>

<p>Type: <a href="https://notesfromjason.writeas.com/tag:Note" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Note</span></a>
Re: <a href="https://notesfromjason.writeas.com/tag:Apple" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Apple</span></a> <a href="https://notesfromjason.writeas.com/tag:Technology" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Technology</span></a></p>

<hr/>

<p><img src="https://i.snap.as/Qni3emj2.png" alt="from Jason"/></p>
]]></content:encoded>
      <guid>https://notesfromjason.writeas.com/apple-is-killing-the-cloud-as-we-know-it</guid>
      <pubDate>Fri, 13 Oct 2023 22:12:46 +0000</pubDate>
    </item>
    <item>
      <title>Mind if I search your car?</title>
      <link>https://notesfromjason.writeas.com/mind-if-i-search-your-car?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Reuters reporting on Meta&#39;s AI chatbot and the dataset the company used to train it:&#xA;&#xA;  Meta also did not use private chats on its messaging services as training data for the model and took steps to filter private details from public datasets used for training, said Meta President of Global Affairs Nick Clegg, speaking on the sidelines of the company&#39;s annual Connect conference this week. &#xA;&#xA;Emphasis mine.&#xA;&#xA;This is a neat little trick. A reasonable reader, or someone not completely cynical, may think the term &#34;private chat&#34; just means chats, which are private in nature. &#xA;&#xA;But Facebook / Meta doesn&#39;t believe chats are inherently private. Privacy is an opt-in feature on Messenger. You must explicitly switch on the end-to-end encryption. Only then will Meta agree to keep out of your user data. &#xA;&#xA;!--more--&#xA;&#xA;From Facebook&#39;s help center:&#xA;&#xA;  A secret conversation in Messenger is encrypted end to end, which means the messages are intended just for you and the other person - not anyone else, including us.&#xA;&#xA;So, when Nick Clegg, Meta&#39;s President of Global Affairs, goes on record to say the company&#39;s AI doesn&#39;t train on &#34;private chats,&#34; it reads like a benign statement. But, it&#39;s impossible to decipher how Clegg is using the term— as an adjective, or part of a noun with a precise technical meaning. &#xA;&#xA;It&#39;s possible that the term belongs to Reuters, as &#34;private chat&#34; isn&#39;t directly quoted. But I find that to be a weird liberty for a journalist to take in a published interview. &#xA;&#xA;Okay, I know it sounds like I&#39;m splitting hairs but, in five years when an exposé breaks, and Zuck is invited to another congressional hearing over privacy concerns, that phrasing gives him an out. &#xA;&#xA;Zuck can be like &#34;we didn&#39;t mean private, we meant Private™. 
Then some congressperson with a hundred grand in Meta stock can throw up their hands and be like &#34;who&#39;s to say, case closed.&#34;&#xA;&#xA;I know of at least one other situation where this type of wordplay occurs.  &#xA;&#xA;Ever been pulled over by a cop and they ask &#34;mind if I search your car?&#34; They specifically ask like this because you&#39;re likely to respond with &#34;yes&#34; or &#34;no.&#34; And because of the way the question is phrased, both potential answers can imply consent to search. &#xA;&#xA;&#34;Mind if I search your car?&#34;&#xA;&#34;No.&#34;&#xA;&#xA;You may&#39;ve meant &#34;no you can&#39;t search my car,&#34; but a cop can argue to a judge that they thought you meant &#34;no, I don&#39;t mind.&#34; The reverse is true with the answer yes. &#34;Yes I mind&#34; and &#34;yes I give you consent&#34; are both plausible interpretations. &#xA;&#xA;It&#39;s a neat little trick. &#xA;&#xA;---&#xA;&#xA;Type: #Note&#xA;Re: #Meta #Privacy #Technology&#xA;&#xA;---&#xA;from Jason notebook]]&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://www.reuters.com/technology/metas-new-ai-chatbot-trained-public-facebook-instagram-posts-2023-09-28/" rel="nofollow">Reuters reporting</a> on Meta&#39;s AI chatbot and the dataset the company used to train it:</p>

<blockquote><p>Meta also did not use <strong>private chats</strong> on its messaging services as training data for the model and took steps to filter private details from public datasets used for training, said Meta President of Global Affairs Nick Clegg, speaking on the sidelines of the company&#39;s annual Connect conference this week.</p></blockquote>

<p>Emphasis mine.</p>

<p>This is a neat little trick. A reasonable reader, or someone not completely cynical, may think the term “private chat” just means chats that are private in nature.</p>

<p>But Facebook / Meta doesn&#39;t believe chats are inherently private. Privacy is an <em>opt-in</em> feature on Messenger. You must explicitly switch on end-to-end encryption. Only then will Meta agree to stay out of your data.</p>



<p>From Facebook&#39;s <a href="https://en-gb.facebook.com/help/messenger-app/811527538946901" rel="nofollow">help center</a>:</p>

<blockquote><p>A secret conversation in Messenger is encrypted end to end, which means the messages are intended just for you and the other person – not anyone else, including us.</p></blockquote>

<p>So, when Nick Clegg, Meta&#39;s President of Global Affairs, goes on record to say the company&#39;s AI doesn&#39;t train on “private chats,” it reads like a benign statement. But it&#39;s impossible to decipher how Clegg is using the term— as an adjective, or as part of a term with a precise technical meaning.</p>

<p>It&#39;s possible that the term belongs to Reuters, as “private chat” isn&#39;t directly quoted. But I find that to be a weird liberty for a journalist to take in a published interview.</p>

<p>Okay, I know it sounds like I&#39;m splitting hairs but, in five years when an exposé breaks, and Zuck is invited to another congressional hearing over privacy concerns, that phrasing gives him an out.</p>

<p>Zuck can be like “we didn&#39;t mean <em>private</em>, we meant <strong>Private™</strong>.” Then some congressperson with a hundred grand in Meta stock can throw up their hands and be like “who&#39;s to say, case closed.”</p>

<p>I know of at least one other situation where this type of wordplay occurs.</p>

<p>Ever been pulled over by a cop and they ask “mind if I search your car?” They specifically ask like this because you&#39;re likely to respond with “yes” or “no.” And because of the way the question is phrased, both potential answers can imply consent to search.</p>

<p><em>“Mind if I search your car?”</em>
<em>“No.”</em></p>

<p>You may&#39;ve meant “no you can&#39;t search my car,” but a cop can argue to a judge that they thought you meant “no, I don&#39;t mind.” The reverse is true with the answer yes. “Yes I mind” and “yes I give you consent” are both plausible interpretations.</p>

<p>It&#39;s a neat little trick.</p>

<hr/>

<p>Type: <a href="https://notesfromjason.writeas.com/tag:Note" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Note</span></a>
Re: <a href="https://notesfromjason.writeas.com/tag:Meta" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Meta</span></a> <a href="https://notesfromjason.writeas.com/tag:Privacy" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Privacy</span></a> <a href="https://notesfromjason.writeas.com/tag:Technology" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Technology</span></a></p>

<hr/>

<p><img src="https://i.snap.as/z7VQsW9C.png" alt="from Jason notebook"/></p>
]]></content:encoded>
      <guid>https://notesfromjason.writeas.com/mind-if-i-search-your-car</guid>
      <pubDate>Tue, 03 Oct 2023 00:20:25 +0000</pubDate>
    </item>
    <item>
      <title>Just one more technology, bro, I promise bro</title>
      <link>https://notesfromjason.writeas.com/just-one-more-technology-bro-i-promise-bro?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[just one more technology and the world will be a better place, bro, for you and for me, bro.&#xA;&#xA;!--more--&#xA;&#xA;From How a startup full of ex-iPhone talent is trying to make phones obsolete:&#xA;&#xA;  Humane is trying to realize the promise of “ambient computing” — an artificial intelligence-driven computing experience that’s personal and contextual — by building a software platform and hardware line that doesn’t rely on screens.&#xA;&#xA;Go on...&#xA;&#xA;  Humane was founded by Bethany Bongiorno and Imran Chaudhri, ex-Apple employees who played major roles in the creation of both the iPhone and iPad. &#xA;&#xA;Two ex-Apple employees who want to make a (presumably) better AI assistant than Siri? Okay, I&#39;m listening...&#xA;&#xA;  At Humane, their stated goal is building technology that&#39;s “familiar, natural, and human,” betters the human experience, and is “born from good intentions.” The company believes “we all deserve more from technology,” &#xA;&#xA;Here it comes...&#xA;&#xA;  Chaudhri also stresses that Humane is focused on “trust and privacy from day zero.” You should have control over what your technology knows, “your data should be owned by you and only you.”&#xA;&#xA;Oh, fuck off lol. &#xA;&#xA;I&#39;m so tired of the &#34;better tomorrow&#34; promise from snazzy tech start-ups. These &#34;privacy from day zero&#34; promises cannot be kept in the longterm. Not if they want VC funding. Not if they want to go public. Not if they build their empire on techno-capitalism. &#xA;&#xA;But please, continue. &#xA;&#xA;  the company is building screen-free ambient computing hardware, and a platform for it to run on, possibly with Android as a starting point...&#xA;&#xA;El Em Motherfucking Aye Oh. &#xA;&#xA;Emphasis mine. &#xA;&#xA;---&#xA;&#xA;Type: #Note&#xA;Re: #Privacy #Siri #Technology &#xA;&#xA;---&#xA;from Jason notebook&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p><em>just <a href="https://x.com/MidTNDSA/status/1529594847459389440" rel="nofollow">one more</a> technology and the world will be a better place, bro, for you and for me, bro.</em></p>



<p>From <em><a href="https://www.inverse.com/tech/humane-projection-device-ex-apple-employees-artificial-intelligence" rel="nofollow">How a startup full of ex-iPhone talent is trying to make phones obsolete</a></em>:</p>

<blockquote><p>Humane is trying to realize the promise of “ambient computing” — an artificial intelligence-driven computing experience that’s personal and contextual — by building a software platform and hardware line that doesn’t rely on screens.</p></blockquote>

<p>Go on...</p>

<blockquote><p>Humane was founded by Bethany Bongiorno and Imran Chaudhri, ex-Apple employees who played major roles in the creation of both the iPhone and iPad.</p></blockquote>

<p>Two ex-Apple employees who want to make a (presumably) better AI assistant than Siri? Okay, <a href="https://notebook.fromjason.xyz/if-apple-doesnt-fix-siri-soon-im-out" rel="nofollow">I&#39;m listening</a>...</p>

<blockquote><p>At Humane, their stated goal is building technology that&#39;s “familiar, natural, and human,” betters the human experience, and is “born from good intentions.” The company believes “we all deserve more from technology,”</p></blockquote>

<p>Here it comes...</p>

<blockquote><p>Chaudhri also stresses that Humane is focused on “trust and privacy from day zero.” You should have control over what your technology knows, “your data should be owned by you and only you.”</p></blockquote>

<p>Oh, fuck off lol.</p>

<p>I&#39;m so tired of the “better tomorrow” promise from snazzy tech start-ups. These “privacy from day zero” promises <a href="https://en.wikipedia.org/wiki/Don%27t_be_evil" rel="nofollow">cannot be kept</a> in the long term. Not if they want VC funding. Not if they want to go public. Not if they build their empire on techno-capitalism.</p>

<p>But please, continue.</p>

<blockquote><p>the company is building screen-free ambient computing hardware, and a platform for it to run on, possibly with <strong>Android</strong> as a starting point...</p></blockquote>

<p>El Em <em>Motherfucking</em> Aye Oh.</p>

<p>Emphasis mine.</p>

<hr/>

<p>Type: <a href="https://notesfromjason.writeas.com/tag:Note" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Note</span></a>
Re: <a href="https://notesfromjason.writeas.com/tag:Privacy" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Privacy</span></a> <a href="https://notesfromjason.writeas.com/tag:Siri" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Siri</span></a> <a href="https://notesfromjason.writeas.com/tag:Technology" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Technology</span></a></p>

<hr/>

<p><img src="https://i.snap.as/z7VQsW9C.png" alt="from Jason notebook"/></p>
]]></content:encoded>
      <guid>https://notesfromjason.writeas.com/just-one-more-technology-bro-i-promise-bro</guid>
      <pubDate>Sun, 01 Oct 2023 22:18:33 +0000</pubDate>
    </item>
    <item>
      <title>Siri can now read a web article but it lacks flair </title>
      <link>https://notesfromjason.writeas.com/siri-can-now-read-a-web-article-but-it-lacks-flair?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[iOS 17 brings a new feature I&#39;ve been waiting a long while to enjoy— Siri can now read aloud web articles right in Safari. &#xA;&#xA;I&#39;ve used the feature a dozen times so far, and while its my favorite new feature by far, the implementation leaves a lot to be desired. Siri isn&#39;t a particularly smart virtual assistant by any measure. So, it&#39;s no wonder that it struggles with all the variables of the written word. &#xA;&#xA;Type: #Note&#xA;Re: #iOS #Siri #Standardization  #Technology &#xA;&#xA;a href=&#34;https://remark.as/p/notesfromjason/siri-can-now-read-a-web-article-but-it-lacks-flair&#34;Discuss.../a&#xA;&#xA;!--more--&#xA;&#xA;Siri doesn&#39;t seem to consider context when choosing the pronunciation of word with multiple potential pronunciations. Apple&#39;s voice assistant also lacks a certain cadence in its delivery; a type of flair one may expect from, say, Stephen Fry reading The Hitchhiker&#39;s Guide to the Galaxy.&#xA;&#xA;Siri&#39;s shortcomings as a voice actor got me thinking-- shouldn&#39;t there be a markup standardization for this sort of thing? Some kind of metadata that communicates pronunciation, emphasis, etc. to AI readers?&#xA;&#xA;It turns out, such a standardization is in the works from the W3C. &#xA;&#xA;Specification for Spoken Presentation in HTML describes two approaches for markup attribution: multi-attribute and single-attribute.&#xA;&#xA;My realization that a standard is in the works (because of course it is, for accessibility reasons) brings up more questions. &#xA;&#xA;One, why hasn&#39;t Apple pushed for this, or even a proprietary public solution, for spoken presentation attributes? Or, have they and I&#39;m missing it? &#xA;&#xA;Two, is this type of attribution considered a matter of formatting? Because if so, is it possible to incorporate these attributes in Markdown? 
I&#39;d love to give my articles a little voice direction without diving into the markup, or without adding to WYSIWYG editors already clunky interfaces. But, perhaps its far too early to say.&#xA;&#xA;But damn, how great would it be to listen on demand to articles, essays, and books, as the author intended. And imagine if, once that ability is widely adopted, Stephen Fry sold his voice for AI reading instead of the technology companies screwing him out of work.&#xA;&#xA;What a pleasant experience that would be. &#xA;&#xA;---&#xA;Continuing down the rabbit hole&#xA;&#xA;Stuff I learned after posting this article:&#xA;&#xA;The W3C has Speech Synthesis Markup Language (SSML) which is an XML-based standard (like RSS?). &#xA;&#xA;Amazon uses SSML for Alexa. Microsoft uses it for Azure AI services. &#xA;&#xA;Interesting related links:&#xA;&#xA;https://popey.com/blog/2022/10/blog-to-speech-in-my-voice/&#xA;&#xA;http://library.usc.edu.ph/ACM/CHI2019/2exabs/alt08.pdf&#xA;&#xA;---&#xA;&#xA;Created: September 29, 2023&#xA;Future revisions?: Unsure&#xA;&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>iOS 17 brings a new feature I&#39;ve been waiting a long while to enjoy— Siri can now read aloud web articles right in Safari.</p>

<p>I&#39;ve used the feature a dozen times so far, and while it&#39;s my favorite new feature by far, the implementation leaves a lot to be desired. Siri <a href="https://notebook.fromjason.xyz/if-apple-doesnt-fix-siri-soon-im-out" rel="nofollow">isn&#39;t a particularly smart</a> virtual assistant by any measure. So, it&#39;s no wonder that it struggles with all the variables of the written word.</p>

<p>Type: <a href="https://notesfromjason.writeas.com/tag:Note" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Note</span></a>
Re: <a href="https://notesfromjason.writeas.com/tag:iOS" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">iOS</span></a> <a href="https://notesfromjason.writeas.com/tag:Siri" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Siri</span></a> <a href="https://notesfromjason.writeas.com/tag:Standardization" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Standardization</span></a>  <a href="https://notesfromjason.writeas.com/tag:Technology" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Technology</span></a></p>

<p><a href="https://remark.as/p/notesfromjason/siri-can-now-read-a-web-article-but-it-lacks-flair" rel="nofollow">Discuss...</a></p>



<p>Siri doesn&#39;t seem to consider context when choosing the pronunciation of a word with multiple potential pronunciations. Apple&#39;s voice assistant also lacks a certain cadence in its delivery: a type of flair one may expect from, say, Stephen Fry reading <em>The Hitchhiker&#39;s Guide to the Galaxy</em>.</p>

<p>Siri&#39;s shortcomings as a voice actor got me thinking— shouldn&#39;t there be a markup standard for this sort of thing? Some kind of metadata that communicates pronunciation, emphasis, etc. to AI readers?</p>

<p>It turns out, such a standard is in the works from the W3C.</p>

<p><a href="https://www.w3.org/TR/spoken-html/" rel="nofollow">Specification for Spoken Presentation in HTML</a> describes two approaches for markup attribution: <strong>multi-attribute</strong> and <strong>single-attribute</strong>.</p>
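<p><em>A rough sketch of the two approaches on a word like “SSML” that should be read letter by letter (the <code>data-ssml</code> attribute follows the draft, but treat the exact property spellings here as illustrative, not normative):</em></p>

<pre><code>&lt;!-- Single-attribute approach: one data-ssml attribute carrying JSON --&gt;
&lt;span data-ssml='{"say-as": {"interpret-as": "characters"}}'&gt;SSML&lt;/span&gt;

&lt;!-- Multi-attribute approach: one attribute per spoken-presentation property --&gt;
&lt;span data-ssml-say-as="characters"&gt;SSML&lt;/span&gt;
</code></pre>

<p>Either way, the idea is the same: an AI reader walks the DOM, finds the attributes, and speaks the element the way the author annotated it instead of guessing.</p>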

<p>My realization that a standard is in the works (because of course it is, for accessibility reasons) brings up more questions.</p>

<p>One, why hasn&#39;t Apple pushed for this, or even a proprietary public solution, for spoken presentation attributes? Or, have they and I&#39;m missing it?</p>

<p>Two, is this type of attribution considered a matter of formatting? Because if so, is it possible to incorporate these attributes in Mark<em>down</em>? I&#39;d love to give my articles a little voice direction without diving into the markup, or without adding to WYSIWYG editors&#39; already clunky interfaces. But, perhaps it&#39;s far too early to say.</p>

<p>But damn, how great would it be to listen on demand to articles, essays, and books, as the author intended? And imagine if, once that ability is widely adopted, Stephen Fry sold his voice for AI reading instead of the technology companies <a href="https://www.theguardian.com/technology/2023/sep/20/it-could-have-me-read-porn-stephen-fry-shocked-by-ai-cloning-of-his-voice-in-documentary" rel="nofollow">screwing him out of work</a>.</p>

<p>What a pleasant experience that would be.</p>

<hr/>

<h3 id="continuing-down-the-rabbit-hole">Continuing down the rabbit hole</h3>

<p>Stuff I learned after posting this article:</p>

<p>The W3C has <a href="https://www.w3.org/TR/speech-synthesis11/" rel="nofollow">Speech Synthesis Markup Language (SSML)</a> which is an XML-based standard (like RSS?).</p>
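<p><em>For a flavor of what SSML looks like in practice (a minimal hand-written example using standard SSML 1.1 elements; the IPA value is an assumption for the American-English noun):</em></p>

<pre><code>&lt;speak version="1.1" xmlns="http://www.w3.org/2001/10/synthesis"&gt;
  I &lt;emphasis level="strong"&gt;never&lt;/emphasis&gt; said she
  &lt;prosody rate="slow"&gt;stole&lt;/prosody&gt; my money.
  &lt;break time="500ms"/&gt;
  Set a new &lt;phoneme alphabet="ipa" ph="ˈrɛkərd"&gt;record&lt;/phoneme&gt;.
&lt;/speak&gt;
</code></pre>

<p>That last element is exactly the heteronym problem above: without the <code>phoneme</code> hint, a synthesizer has to guess between the noun and the verb pronunciation of “record.”</p>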

<p>Amazon <a href="https://developer.amazon.com/en-US/docs/alexa/custom-skills/speech-synthesis-markup-language-ssml-reference.html" rel="nofollow">uses</a> SSML for Alexa. Microsoft <a href="https://learn.microsoft.com/en-us/azure/ai-services/speech-service/speech-synthesis-markup" rel="nofollow">uses</a> it for Azure AI services.</p>

<p>Interesting related links:</p>

<p><a href="https://popey.com/blog/2022/10/blog-to-speech-in-my-voice/" rel="nofollow">https://popey.com/blog/2022/10/blog-to-speech-in-my-voice/</a></p>

<p><a href="http://library.usc.edu.ph/ACM/CHI2019/2exabs/alt08.pdf" rel="nofollow">http://library.usc.edu.ph/ACM/CHI2019/2exabs/alt08.pdf</a></p>

<hr/>

<p>Created: September 29, 2023
Future revisions?: Unsure</p>
]]></content:encoded>
      <guid>https://notesfromjason.writeas.com/siri-can-now-read-a-web-article-but-it-lacks-flair</guid>
      <pubDate>Sat, 30 Sep 2023 00:00:28 +0000</pubDate>
    </item>
    <item>
      <title>He killed something beautiful</title>
      <link>https://notesfromjason.writeas.com/he-killed-something-beautiful?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[You know how when you push down on your feed, it refreshes? Twitter&#39;s design team invented that feature. They invented it; something so ubiquitous and elegant you don&#39;t even think about it.&#xA;&#xA;Type: #Note&#xA;Re: #Technology #Design &#xA;!--more--&#xA;&#xA;Twitter was one of the first social media companies to brand user interactions within the platform— tweets, retweets, hashtags— that was all Twitter&#39;s wildly inventive dev team.&#xA;&#xA;Twitter&#39;s user interface is so incredibly well-structured, with such a strong design language, that it feels as natural as the multi-touch screen it lives in. Companies, to this day, shamelessly copy everything Twitter does.&#xA;&#xA;When mobile websites started gaining popularity in the early 2010s, Twitter released Bootstrap, one of the first responsive front-end frameworks, for free. Today, millions of websites are built on Bootstrap because how well structured it is. I learned how to code using Bootstrap and I still use it often.&#xA;&#xA;Twitter has one of the best logos ever created. It’s iconic in its simplicity and thoughtfulness. It was one of the first to consider mobile interfaces, designed to be recognizable no matter the size.&#xA;&#xA;All this to say, to watch the brand fall victim to the antithesis of good design is gut-wrenching. I can&#39;t think of a worse death for the bird app. Elon bought something beautiful, and his instinct was to destroy it. It almost feels as if he&#39;s punishing it for achieving something he could never have achieved on his own— good taste and measured execution. You wonder if the only reason he hasn&#39;t destroyed Tesla is because he has shareholders to tell him no.&#xA;&#xA;I want Twitter to die. Not because of what it was, but because of what it has turned into. Twitter deserved a dignified death. Elon denied it that. 
He snuffed it out in the ugliest way possible.&#xA;&#xA;---&#xA;Creation: 7/25/2023&#xA;Last Evolution: --]]&gt;</description>
      <content:encoded><![CDATA[<p>You know how when you push down on your feed, it refreshes? Twitter&#39;s design team invented that feature. They <em>invented</em> it; something so ubiquitous and elegant you don&#39;t even think about it.</p>

<p>Type: <a href="https://notesfromjason.writeas.com/tag:Note" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Note</span></a>
Re: <a href="https://notesfromjason.writeas.com/tag:Technology" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Technology</span></a> <a href="https://notesfromjason.writeas.com/tag:Design" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Design</span></a>
</p>

<p>Twitter was one of the first social media companies to brand user interactions within the platform— tweets, retweets, hashtags— that was all Twitter&#39;s wildly inventive dev team.</p>

<p>Twitter&#39;s user interface is so incredibly well-structured, with such a strong design language, that it feels as natural as the multi-touch screen it lives in. Companies, to this day, shamelessly copy everything Twitter does.</p>

<p>When mobile websites started gaining popularity in the early 2010s, Twitter released Bootstrap, one of the first responsive front-end frameworks, for free. Today, millions of websites are built on Bootstrap because of how well structured it is. I learned how to code using Bootstrap and I still use it often.</p>

<p>Twitter has one of the best logos ever created. It’s iconic in its simplicity and thoughtfulness. It was one of the first to consider mobile interfaces, designed to be recognizable no matter the size.</p>

<p>All this to say, to watch the brand fall victim to the antithesis of good design is gut-wrenching. I can&#39;t think of a worse death for the bird app. Elon bought something beautiful, and his instinct was to destroy it. It almost feels as if he&#39;s punishing it for achieving something he could never have achieved on his own— good taste and measured execution. You wonder if the only reason he hasn&#39;t destroyed Tesla is because he has shareholders to tell him no.</p>

<p>I want Twitter to die. Not because of what it was, but because of what it has turned into. Twitter deserved a dignified death. Elon denied it that. He snuffed it out in the ugliest way possible.</p>

<hr/>

<p>Creation: 7/25/2023
Last Evolution: —</p>
]]></content:encoded>
      <guid>https://notesfromjason.writeas.com/he-killed-something-beautiful</guid>
      <pubDate>Sun, 24 Sep 2023 00:42:13 +0000</pubDate>
    </item>
    <item>
      <title>Facebook created the blueprint for Cambridge Analytica</title>
      <link>https://notesfromjason.writeas.com/facebook-created-the-blueprint-for-cambridge-analytica?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[If you&#39;re like me, you may&#39;ve assumed that the Cambridge Analytica (C.A.) scandal was an HBO-Original-style hack. &#xA;&#xA;Type: #Essay&#xA;Re: #Politech #Technology&#xA;!--more--&#xA;&#xA;Watching Zuckerberg describe what happened, I pictured a shadowy man under a black cotton hood. Nineties techno blared over the deft clacking of a mechanical keyboard. &#xA;&#xA;I could almost taste the Monster Energy drink, lukewarm and long since stale. &#34;We’re in,&#34; he whispered as a waterfall of green gibberish fell down his dark Oakley sunglasses. If you&#39;re like me, you&#39;d be wrong about that imagery. They can&#39;t steal what was handed to them. &#xA;&#xA;Looking back, it almost feels intentional. From congressional hearings to a Netflix documentary, we heard language that implied our data was stolen. It wasn’t.&#xA;&#xA;  Facebook allowed a foreign company to steal private information. They allowed a foreign company to steal sensitive information from tens of millions of Americans.&#xA;—U.S. Senator Jon Tester (D) at the 2018 congressional hearing on Facebook&#39;s role in the Cambridge Analytica scandal.&#xA;&#xA;In reality, Cambridge Analytica used Facebook&#39;s open and available tools to harvest the personal data of 87 million Americans — door open, welcome sign lit. C.A. then used that data against us and exploited our most vulnerable neuroses without our knowledge or consent. On the other hand, Facebook not only knew this manipulation was possible, they literally wrote the book on it several years prior.&#xA;&#xA;In this article, I will explain how Facebook paved the way for Cambridge Analytica to successfully execute one of the most aggressive psychological operations in modern history. To my knowledge, these connections have not been made by media or congress. Why? 
Perhaps short memories and a poor understanding of the technology that runs our lives are to blame.&#xA;&#xA;  Behavioral Psychology + Big Data + Targeted Engagement = Behavior Change&#xA;— Cambridge Analytica pitch deck&#xA;&#xA;Image Memory Glasses&#xA;&#xA;There’s a scene towards the end of the first act of Donnie Darko that popped into my head while writing this article. Perhaps you remember it.&#xA;&#xA;Donnie Darko (Jake Gyllenhaal) and his girlfriend Gretchen (Jena Malone) stand in front of the classroom to present their imaginary invention called the Infant Memory Generator. In the scene, they describe a pair of glasses that could, in theory, display a slideshow of pleasant images to a sleeping baby.&#xA;&#xA;You can instantly feel the tension in the room. The teacher (Noah Wyle) is visibly upset by the idea. He asks his students whether they considered that a baby needs darkness to sleep. The two school bullies (Alex Greenwald and Seth Rogen) immediately raise their hands. “What if the parents put in pictures of Satan?” one asks. “Or, like, dead people? Crap like that.”&#xA;&#xA;The implication here, realized by everyone in the room except Donnie Darko and Gretchen, is that their invention could have the power to affect a baby’s mood and behavior in unpredictable ways. In the wrong hands, such a device could be dangerous.&#xA;&#xA;Gretchen then replies to the bully, “Is that what you’d show your kids?”&#xA;&#xA;Donnie Darko Scene&#xA;&#xA;The dawn of emotional engineering&#xA;&#xA;In 2010, Facebook released a public study that showed off its ability to affect voter turnout. It was a brazen admission considering how easy it was to pull off. It wasn’t anything that a graphic designer with access to our newsfeeds couldn’t achieve.&#xA;&#xA;Facebook injected a banner into the newsfeeds of three subsets of Facebook users. 
The first group saw a banner with a pro-voting message, a link to find your nearest polling location, and the profile pictures of friends who had already voted. The second group also saw a pro-voting banner, this time without the polling link or social encouragement. The third group (everyone else) saw no banner at all, just their normal newsfeeds.&#xA;&#xA;Facebook banner&#xA;&#xA;Here are the results, reported by the New Statesman in its 2014 article: Facebook could decide an election without anyone ever finding out:&#xA;&#xA;  The researchers concluded that their Facebook graphic directly mobilized 60,000 voters and, thanks to the ripple effect, ultimately caused an additional 340,000 votes to be cast that day.&#xA;&#xA;For context, George W. Bush won Florida in 2000, and thus the presidency, by a little over 500 votes. Donald Trump won the presidency by a combined 80,000 votes across three states. Razor-thin margins win presidential elections in this country. Facebook, it seems, has the power to sway close democratic elections in either direction, at the flip of a digital switch, if it so chooses.&#xA;&#xA;In 2012, Facebook set out to answer another question — can we alter people’s moods by changing what they see on their newsfeeds? The answer, revealed in a study released publicly two years later to major ethical concerns and mild internet outrage, was, yeah, you can. In fact, not only can you change a person’s mood, but that person could, through their own posts, affect the moods of their unwitting Facebook friends.&#xA;&#xA;The study also showed that users wrote longer posts after negative or positive content was injected into their newsfeeds. The opposite was true when their feeds became closer to neutral — they wrote posts with fewer words and were less likely to affect the moods of their friends. 
In short, the study determined that if you can change a person’s mood, you can also change their behavior.&#xA;&#xA;  The holy grail of communications is when you can start to change behavior.&#xA;— Cambridge Analytica&#xA;&#xA;Facebook and Cambridge Analytica&#xA;&#xA;The two studies mentioned above acted as blueprints, of sorts, for manipulating Facebook’s own user base on a large scale, without any of the users’ consent. What’s worse, Mark Zuckerberg shared this information with the press, potential advertisers, and the public at large. Not as a warning, but as an accomplishment to be revered. Perhaps his boasting would not have been such a big deal if Facebook guarded our personal data from outside organizations. But it didn’t. Instead, Zuckerberg did the opposite and provided app developers with a large, injudicious flow of precisely the type of data needed to recreate the scenarios outlined in the published studies.&#xA;&#xA;In 2010, Facebook announced a new API called Open Graph. (An API is a way for applications to summon specific pieces of data from other applications.) Facebook pitched Open Graph as a way for developers to implement Facebook features — things like commenting ability, the Facebook Like button, and Facebook Login — into third-party apps. Open Graph also gave app developers generous access to the treasure trove of user data Facebook had amassed over the years.&#xA;&#xA;Facebook Announces Open Graph&#xA;&#xA;One popular method for opening the data spigot was to use Facebook Login as a primary or exclusive method for signing into a third-party app. Cambridge Analytica used this exact method. It’s worth noting that the term “Open Graph” was not uttered once by a U.S. 
Senator during the Facebook Congressional hearings a few years ago.&#xA;&#xA;In 2018, when the Cambridge Analytica story broke, Zuckerberg called the incident a “breach of trust.” Still, he gave no indication he cared about the collection of data itself or how Cambridge Analytica used that data to develop psychological profiles of Americans. None of that was against Facebook’s terms of service. The issue was that, technically, C.A. was an outside party, as they hired an academic to create the app, then later paid the academic for his efforts. And that is against the rules. But Zuckerberg doesn’t like to hold grudges. According to Christopher Wylie, the famous whistleblower and former Cambridge Analytica employee, Facebook’s ad team, led by COO Sheryl Sandberg, helped C.A. develop their advertising campaigns a full year after Facebook knew of this “breach of trust.”&#xA;&#xA;Zuckerberg on CNN&#xA;&#xA;In order to group users by various psychological traits (then later serve ads that exploited those traits), Cambridge Analytica used a psychological model called OCEAN — Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism.&#xA;&#xA;  C.A. claims that these models were at the heart of how they profiled you — your neuroses and other exploitable traits. &#xA;— New York Times&#xA;&#xA;In 2015, Cambridge Analytica created a psychological quiz and paid 300,000 Facebook users roughly $5 each to log into the app and complete the quiz. Since those users signed in using Facebook Login, C.A. was not only able to obtain the names, locations, and ‘like’ histories of the more than a quarter million participants, but their friends’ names, locations, and ‘like’ histories as well. Eighty-seven million friends, to be exact. Where did Cambridge Analytica get the idea for this psychological operation? Facebook.&#xA;&#xA;In 2012, Facebook released a third study: Private traits and attributes are predictable from digital records of human behavior. 
The study detailed a method for applying the OCEAN psychological model to Facebook users based on their ‘like’ history and, yes, results from a quiz.&#xA;&#xA;By its own account, Cambridge Analytica was founded on the ability to harvest and profile user data. The studies conducted and shared publicly by Facebook seem to line up perfectly with Cambridge Analytica’s strategy for the 2016 Presidential election. Without those studies and without access to all that user data via the Open Graph, Cambridge Analytica simply does not exist.&#xA;&#xA;Steve Bannon: Vice President, Cambridge Analytica&#xA;&#xA;Our newsfeeds operate in darkness&#xA;&#xA;There’s a missing piece to this puzzle. You’d be hard-pressed to find a single advertisement made by Cambridge Analytica. How can that be? No one has seen the ads post-scandal because they ran as something called dark ads, or dark posts.&#xA;&#xA;Here’s a quote by Carole Cadwalladr from her 2019 Ted Talk titled Facebook’s role in Brexit — and the threat to democracy:&#xA;&#xA;  This entire referendum took place in darkness because it took place on Facebook. And what happens on Facebook stays on Facebook because only you see your news feed, and then it vanishes, so it’s impossible to research anything.&#xA;&#xA;Facebook has since created an open ad library in which anyone can see what ads are currently running. However, there is very little anyone can do to research what people saw on their newsfeeds during the months leading to the 2016 election in the absence of government pressure. I can’t stress enough how inept the U.S. 
Congress has been through all this and how little they’ve managed to hold Facebook accountable.&#xA;&#xA;Thankfully, the U.K. Parliament was able to subpoena a few of the Brexit ads developed by C.A. Another misconception is that C.A. meddled only in the 2016 U.S. presidential election; in fact, they’ve been accused of orchestrating disinformation campaigns in over 150 democratic elections worldwide.&#xA;&#xA;One of those subpoenaed ads warned of Turkey joining the E.U., even though there was never any indication that anyone so much as considered that possibility. Cambridge Analytica identified users prone to xenophobia, then targeted ads to them that incited fear. No one knew this was happening at the time because only Facebook ultimately knows what’s happening on our newsfeeds. 
Beyond Facebook itself, only we, as individuals, know what Facebook decides to put in front of our eyeballs.&#xA;&#xA;Are we really connected in the dark?&#xA;&#xA;Mark Zuckerberg and Sheryl Sandberg have long pushed the narrative of connecting Facebook users to the world, but they’ve done the opposite. Facebook has stripped us of the very thing that unites us as humans — our shared experiences and understanding of reality. We’ve been separated into psychological silos. Our worst fears, biases, and neuroses are collected and categorized, then fed back to us via our newsfeeds, closed groups, and dark posts.&#xA;&#xA;Our governments demand fact-checking for our newsfeeds by the same force amplifying the lies. And lost in all this is what we arguably need the most: the best fact-checkers we have — our peers. We’ve been encouraged by Facebook to quietly unfollow any friction in our social circles, and in that process, we may lose a trusted friend’s voice of reason. We no longer see posts from known experts willing to contextualize a claim because they’ve been muted. Facebook provided us with the tools not to connect but to isolate. And we chip away at the people around us, the ones who matter, until all we have are messages designed to exploit our most vulnerable traits. All the while, we are oblivious to, and cannot opt out from, the psychological manipulation.&#xA;&#xA;What Facebook allowed Cambridge Analytica to do was weaponize its tools in a way that left us no longer agreeing on basic truths — vaccinations prevent disease, the world is a sphere, Hillary Clinton is not drinking the blood of children as a fountain of youth.&#xA;&#xA;We lie in our beds before we sleep, and we stare at a screen, at our very own personal slideshow that prioritizes negativity and disinformation. There is no one around to see what we see. 
No one to help us reason away our fears.&#xA;&#xA;Mark Zuckerberg and Sheryl Sandberg continue to stand in front of the classroom, pitching us on new products they themselves don’t fully understand. If Facebook is willing and able to control us from a glowing rectangle one foot from our faces, just imagine what they can accomplish with a pair of Image Memory Glasses wrapped around our heads as we enter the metaverse.&#xA;&#xA;After the scandal broke in 2018, Cambridge Analytica closed its doors, only to reappear under the name Emerdata. Facebook has since changed its name to Meta, and Jake Gyllenhaal finds himself in the crosshairs of Taylor Swift fans.&#xA;&#xA;Donnie Darko Scene&#xA;&#xA;from jason&#xA;&#xA;---&#xA;&#xA;Created: October 21, 2021&#xA;Last Evolved: November 2, 2022]]&gt;</description>
      <content:encoded><![CDATA[<p>If you&#39;re like me, you may&#39;ve assumed that the Cambridge Analytica (C.A.) scandal was an HBO-Original-style hack.</p>

<p>Type: <a href="https://notesfromjason.writeas.com/tag:Essay" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Essay</span></a>
Re: <a href="https://notesfromjason.writeas.com/tag:Politech" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Politech</span></a> <a href="https://notesfromjason.writeas.com/tag:Technology" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Technology</span></a>
</p>

<p>Watching Zuckerberg describe what happened, I pictured a shadowy man under a black cotton hood. Nineties techno blared over the deft clacking of a mechanical keyboard.</p>

<p>I could almost taste the Monster Energy drink, lukewarm and long since stale. “We’re in,” he whispered as a waterfall of green gibberish fell down his dark Oakley sunglasses. If you&#39;re like me, you&#39;d be wrong about that imagery. They can&#39;t steal what was handed to them.</p>

<p>Looking back, it almost feels intentional. From <a href="https://www.youtube.com/watch?v=mZaec_mlq9M" rel="nofollow">congressional hearings</a> to a Netflix documentary, we heard language that implied our data was stolen. It wasn’t.</p>

<blockquote><p>Facebook allowed a foreign company to steal private information. They allowed a foreign company to steal sensitive information from tens of millions of Americans.
—<em>U.S. Senator Jon Tester (D) at the 2018 congressional hearing on Facebook&#39;s role in the Cambridge Analytica scandal.</em></p></blockquote>

<p>In reality, Cambridge Analytica used Facebook&#39;s open and available tools to harvest the personal data of 87 million Americans — door open, welcome sign lit. C.A. then used that data against us and exploited our most vulnerable neuroses without our knowledge or consent. Worse still, Facebook not only knew this manipulation was possible, they literally wrote the book on it several years prior.</p>

<p>In this article, I will explain how Facebook paved the way for Cambridge Analytica to successfully execute one of the most aggressive psychological operations in modern history. To my knowledge, these connections have not been made by the media or Congress. Why? Perhaps short memories and a poor understanding of the technology that runs our lives are to blame.</p>

<blockquote><p>Behavioral Psychology + Big Data + Targeted Engagement = Behavior Change
— <em>Cambridge Analytica pitch deck</em></p></blockquote>

<h2 id="image-memory-glasses">Image Memory Glasses</h2>

<p>There’s a scene towards the end of the first act of Donnie Darko that popped into my head while writing this article. Perhaps you remember it.</p>

<p>Donnie Darko (Jake Gyllenhaal) and his girlfriend Gretchen (Jena Malone) stand in front of the classroom to present their imaginary invention called the Infant Memory Generator. In the scene, they describe a pair of glasses that could, in theory, display a slideshow of pleasant images to a sleeping baby.</p>

<p>You can instantly feel the tension in the room. The teacher (Noah Wyle) is visibly upset by the idea. He asks his students whether they considered that a baby needs darkness to sleep. The two school bullies (Alex Greenwald and Seth Rogen) immediately raise their hands. “What if the parents put in pictures of Satan?” one asks. “Or, like, dead people? Crap like that.”</p>

<p>The implication here, realized by everyone in the room except Donnie Darko and Gretchen, is that their invention could have the power to affect a baby’s mood and behavior in unpredictable ways. In the wrong hands, such a device could be dangerous.</p>

<p>Gretchen then replies to the bully, “Is that what you’d show your kids?”</p>

<p><img src="https://i.snap.as/pXWHQNQY.jpg" alt="Donnie Darko Scene"/></p>

<h2 id="the-dawn-of-emotional-engineering">The dawn of emotional engineering</h2>

<p>In 2010, Facebook released a public study that showed off its ability to affect voter turnout. It was a brazen admission considering how easy it was to pull off. It wasn’t anything that a graphic designer with access to our newsfeeds couldn’t achieve.</p>

<p>Facebook injected a banner into the newsfeeds of three subsets of Facebook users. The first group saw a banner with a pro-voting message, a link to find your nearest polling location, and the profile pictures of friends who had already voted. The second group also saw a pro-voting banner, this time without the polling link or social encouragement. The third group (everyone else) saw no banner at all, just their normal newsfeeds.</p>

<p><img src="https://i.snap.as/3YHUmkXk.jpg" alt="Facebook banner"/></p>

<p>Here are the results, reported by the New Statesman in its 2014 article: <a href="https://www.newstatesman.com/science-tech/2014/06/facebook-could-decide-election-without-anyone-ever-finding-out" rel="nofollow">Facebook could decide an election without anyone ever finding out</a>:</p>

<blockquote><p>The researchers concluded that their Facebook graphic directly mobilized 60,000 voters and, thanks to the ripple effect, ultimately caused an additional 340,000 votes to be cast that day.</p></blockquote>

<p>For context, George W. Bush won Florida in 2000, and thus the presidency, by a little over 500 votes. Donald Trump won the presidency by a combined 80,000 votes across three states. Razor-thin margins win presidential elections in this country. Facebook, it seems, has the power to sway close democratic elections in either direction, at the flip of a digital switch, if it so chooses.</p>
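
<p>Here is that comparison as plain arithmetic — a rough sketch using the counts cited above (537 is Bush’s certified Florida margin):</p>

```python
# Facebook's self-reported mobilization vs. recent U.S. election margins.
# Counts come from the article above; 537 is Bush's certified 2000 Florida margin.
direct_votes = 60_000    # voters the banner directly mobilized
ripple_votes = 340_000   # additional votes attributed to the social ripple effect
mobilized = direct_votes + ripple_votes          # 400,000 votes total

florida_2000_margin = 537
trump_2016_margin = 80_000                       # combined, across three states

print(mobilized // florida_2000_margin)          # one banner ~= 744 Florida margins
print(mobilized // trump_2016_margin)            # and 5x Trump's 2016 margin
```

<p>One pro-voting graphic, by Facebook’s own accounting, moved hundreds of times the margin that decided the 2000 election.</p>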

<p>In 2012, Facebook set out to answer another question — can we alter people’s moods by changing what they see on their newsfeeds? The answer, <a href="https://www.pnas.org/content/111/24/8788" rel="nofollow">revealed in a study</a> released publicly two years later to major ethical concerns and mild internet outrage, was, yeah, you can. In fact, not only can you change a person’s mood, but that person could, through their own posts, affect the moods of their unwitting Facebook friends.</p>

<p>The study also showed that users wrote longer posts after negative or positive content was injected into their newsfeeds. The opposite was true when their feeds became closer to neutral — they wrote posts with fewer words and were less likely to affect the moods of their friends. In short, the study determined that if you can change a person’s mood, you can also change their behavior.</p>

<blockquote><p>The holy grail of communications is when you can start to change behavior.
— <em>Cambridge Analytica</em></p></blockquote>

<h2 id="facebook-and-cambridge-analytica">Facebook and Cambridge Analytica</h2>

<p>The two studies mentioned above acted as blueprints, of sorts, for manipulating Facebook’s own user base on a large scale, without any of the users’ consent. What’s worse, Mark Zuckerberg shared this information with the press, potential advertisers, and the public at large. Not as a warning, but as an accomplishment to be revered. Perhaps his boasting would not have been such a big deal if Facebook guarded our personal data from outside organizations. But it didn’t. Instead, Zuckerberg did the opposite and provided app developers with a large, injudicious flow of precisely the type of data needed to recreate the scenarios outlined in the published studies.</p>

<p>In 2010, Facebook announced a new API called Open Graph. (An API is a way for applications to summon specific pieces of data from other applications.) Facebook pitched Open Graph as a way for developers to implement Facebook features — things like commenting ability, the Facebook Like button, and Facebook Login — into third-party apps. Open Graph also gave app developers generous access to the treasure trove of user data Facebook had amassed over the years.</p>
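
<p>A toy model makes the danger concrete. The sketch below uses hypothetical names and data — not Facebook’s actual API — but it captures the pattern: one user grants an app access, and the app walks the social graph to collect their friends’ data too.</p>

```python
# Toy model of the pre-2015 Open Graph data flow (hypothetical names and
# data, not Facebook's actual API): one user's consent exposes not only
# their own profile but their friends' profiles and 'like' histories.

SOCIAL_GRAPH = {
    "alice": {"likes": ["hiking", "jazz"], "friends": ["bob", "carol"]},
    "bob":   {"likes": ["chess"],          "friends": ["alice"]},
    "carol": {"likes": ["gardening"],      "friends": ["alice"]},
}

def fetch_profile(user, fields):
    """Stand-in for a graph-API call like GET /<user>?fields=..."""
    return {f: SOCIAL_GRAPH[user][f] for f in fields}

def app_harvest(consenting_user):
    """What a third-party app could collect from ONE consenting login."""
    harvested = {consenting_user: fetch_profile(consenting_user, ["likes"])}
    # The friends never installed the app, yet their data comes along
    # for the ride.
    for friend in SOCIAL_GRAPH[consenting_user]["friends"]:
        harvested[friend] = fetch_profile(friend, ["likes"])
    return harvested

data = app_harvest("alice")
print(sorted(data))  # → ['alice', 'bob', 'carol']: alice consented; bob and carol did not
```

<p>Scale that one-login-to-many-profiles pattern up to hundreds of thousands of logins and you have the harvest described below.</p>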

<p><img src="https://i.snap.as/TRCwsryU.jpg" alt="Facebook Announces Open Graph"/></p>

<p>One popular method for opening the data spigot was to use Facebook Login as a primary or exclusive method for signing into a third-party app. Cambridge Analytica used this exact method. It’s worth noting that the term “Open Graph” was <a href="https://www.washingtonpost.com/news/the-switch/wp/2018/04/10/transcript-of-mark-zuckerbergs-senate-hearing/" rel="nofollow">not uttered once</a> by a U.S. Senator during the Facebook Congressional hearings a few years ago.</p>

<p>In 2018, when the Cambridge Analytica story broke, Zuckerberg called the incident a “breach of trust.” Still, he gave no indication he cared about the collection of data itself or how Cambridge Analytica used that data to develop psychological profiles of Americans. None of that was against Facebook’s terms of service. The issue was that, technically, C.A. was an outside party, as they hired an academic to create the app, then later paid the academic for his efforts. And that is against the rules. But Zuckerberg doesn’t like to hold grudges. According to Christopher Wylie, the famous whistleblower and former Cambridge Analytica employee, Facebook’s ad team, led by COO Sheryl Sandberg, helped C.A. develop their advertising campaigns a full year after Facebook knew of this “breach of trust.”</p>

<p><img src="https://i.snap.as/RBjgDZa2.jpg" alt="Zuckerberg on CNN"/></p>

<p>In order to group users by various psychological traits (then later serve ads that exploited those traits), Cambridge Analytica used a psychological model called OCEAN — <em>Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism</em>.</p>

<blockquote><p>C.A. claims that these models were at the heart of how they profiled you — your neuroses and other exploitable traits. 
— <a href="https://youtube.com/watch?v=mrnXv-g4yKU&amp;feature=share" rel="nofollow">New York Times</a></p></blockquote>

<p>In 2015, Cambridge Analytica created a psychological quiz and paid 300,000 Facebook users roughly $5 each to log into the app and complete the quiz. Since those users signed in using Facebook Login, C.A. was not only able to obtain the names, locations, and ‘like’ histories of the more than a quarter million participants, but their friends’ names, locations, and ‘like’ histories as well. Eighty-seven million friends, to be exact. Where did Cambridge Analytica get the idea for this psychological operation? Facebook.</p>
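
<p>A rough back-of-the-envelope calculation, using only the figures above, shows how efficient that harvest was:</p>

```python
# Back-of-the-envelope on the quiz harvest, using the figures cited above.
participants = 300_000          # users paid ~$5 each to take the quiz
total_profiles = 87_000_000     # friends' profiles ultimately collected

profiles_per_login = total_profiles // participants  # 290 profiles per login
quiz_budget = participants * 5                       # ~$1.5M in payouts
cost_per_profile = quiz_budget / total_profiles      # under two cents each

print(profiles_per_login)
print(round(cost_per_profile, 3))
```

<p>In other words, each consenting login dragged roughly 290 non-consenting friends into the dataset, at a cost of under two cents per profile.</p>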

<p>In 2012, Facebook released a third study: <strong><a href="https://www.pnas.org/content/110/15/5802" rel="nofollow">Private traits and attributes are predictable from digital records of human behavior</a></strong>. The study detailed a method for applying the OCEAN psychological model to Facebook users based on their ‘like’ history and, yes, results from a quiz.</p>

<p>By its own account, Cambridge Analytica was founded on the ability to harvest and profile user data. The studies conducted and shared publicly by Facebook seem to line up perfectly with Cambridge Analytica’s strategy for the 2016 Presidential election. Without those studies and without access to all that user data via the Open Graph, Cambridge Analytica simply does not exist.</p>

<p><img src="https://i.snap.as/7s7wFJhU.jpg" alt="Steve Bannon: Vice President, Cambridge Analytica"/></p>

<h2 id="our-newsfeeds-operate-in-darkness">Our newsfeeds operate in darkness</h2>

<p>There’s a missing piece to this puzzle. You’d be hard-pressed to find a single advertisement made by Cambridge Analytica. How can that be? No one has seen the ads post-scandal because they ran as something called dark ads, or <a href="https://www.theguardian.com/technology/2018/oct/16/facebook-dark-ads-british-political-groups" rel="nofollow">dark posts</a>.</p>

<p>Here’s a quote by Carole Cadwalladr from her 2019 Ted Talk titled <a href="https://www.ted.com/talks/carole_cadwalladr_facebook_s_role_in_brexit_and_the_threat_to_democracy" rel="nofollow">Facebook’s role in Brexit — and the threat to democracy</a>:</p>

<blockquote><p>This entire referendum took place in darkness because it took place on Facebook. And what happens on Facebook stays on Facebook because only you see your news feed, and then it vanishes, so it’s impossible to research anything.</p></blockquote>

<p>Facebook has since created an open ad library in which anyone can see what ads are currently running. However, there is very little anyone can do to research what people saw on their newsfeeds during the months leading to the 2016 election in the absence of government pressure. I can’t stress enough how inept the U.S. Congress has been through all this and how little they’ve managed to hold Facebook accountable.</p>

<p>Thankfully, the U.K. Parliament was able to subpoena a few of the Brexit ads developed by C.A. Another misconception is that C.A. meddled only in the 2016 U.S. presidential election; in fact, they’ve been accused of orchestrating disinformation campaigns in over 150 democratic elections worldwide.</p>

<p>One of those subpoenaed ads warned of Turkey joining the E.U., even though there was never any indication that anyone so much as considered that possibility. Cambridge Analytica identified users prone to xenophobia, then targeted ads to them that incited fear. No one knew this was happening at the time because only Facebook ultimately knows what’s happening on our newsfeeds. Beyond Facebook itself, only we, as individuals, know what Facebook decides to put in front of our eyeballs.</p>

<h2 id="are-we-really-connected-in-the-dark">Are we really connected in the dark?</h2>

<p>Mark Zuckerberg and Sheryl Sandberg have long pushed the narrative of connecting Facebook users to the world, but they’ve done the opposite. Facebook has stripped us of the very thing that unites us as humans — our shared experiences and understanding of reality. We’ve been separated into psychological silos. Our worst fears, biases, and neuroses are collected and categorized, then fed back to us via our newsfeeds, closed groups, and dark posts.</p>

<p>Our governments demand fact-checking for our newsfeeds by the same force amplifying the lies. And lost in all this is what we arguably need the most: the best fact-checkers we have — our peers. We’ve been encouraged by Facebook to quietly unfollow any friction in our social circles, and in that process, we may lose a trusted friend’s voice of reason. We no longer see posts from known experts willing to contextualize a claim because they’ve been muted. Facebook provided us with the tools not to connect but to isolate. And we chip away at the people around us, the ones who matter, until all we have are messages designed to exploit our most vulnerable traits. All the while, we are oblivious to, and cannot opt out from, the psychological manipulation.</p>

<p>What Facebook allowed Cambridge Analytica to do was weaponize its tools in a way that left us no longer agreeing on basic truths — vaccinations prevent disease, the world is a sphere, Hillary Clinton is not drinking the blood of children as a fountain of youth.</p>

<p>We lie in our beds before we sleep, and we stare at a screen, at our very own personal slideshow that prioritizes negativity and disinformation. There is no one around to see what we see. No one to help us reason away our fears.</p>

<p>Mark Zuckerberg and Sheryl Sandberg continue to stand in front of the classroom, pitching us on new products they themselves don’t fully understand. If Facebook is willing and able to control us from a glowing rectangle one foot from our faces, just imagine what they can accomplish with a pair of Image Memory Glasses wrapped around our heads as we enter the metaverse.</p>

<p>After the scandal broke in 2018, Cambridge Analytica closed its doors, only to reappear under the name <a href="https://en.wikipedia.org/wiki/Emerdata_Limited" rel="nofollow">Emerdata</a>. Facebook has since changed its name to Meta, and Jake Gyllenhaal finds himself in the crosshairs of Taylor Swift fans.</p>

<p><img src="https://i.snap.as/Q9HYXiwE.jpg" alt="Donnie Darko Scene"/></p>

<p><img src="https://i.snap.as/Qni3emj2.png" alt="from jason"/></p>

<hr/>

<p>Created: October 21, 2021
Last Evolved: November 2, 2022</p>
]]></content:encoded>
      <guid>https://notesfromjason.writeas.com/facebook-created-the-blueprint-for-cambridge-analytica</guid>
      <pubDate>Fri, 22 Sep 2023 22:14:47 +0000</pubDate>
    </item>
    <item>
      <title>If Apple doesn&#39;t fix Siri soon, I&#39;m out.</title>
      <link>https://notesfromjason.writeas.com/if-apple-doesnt-fix-siri-soon-im-out?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[I have everything a person entrenched in Apple&#39;s ecosystem would have— iPhone, iPad, Apple Watch, MacBook Pro, etcetera, etcetera. &#xA;&#xA;Type: #Note&#xA;Re: #Design #Siri #Technology&#xA;!--more--&#xA;&#xA;I have a thousand purchased songs and 250 movies on iTunes. My AirPods are an extension of me during work hours (and after work if I&#39;m honest). &#xA;&#xA;Leaving Apple for another ecosystem would be a messy divorce in which I would lose everything. Yet, I yearn. I long. I flirt. Anytime I shout, &#34;Hey, Siri,&#34; I wonder what could be if my virtual assistant wasn&#39;t a fucking idiot.&#xA;&#xA;It didn&#39;t take long for ChatGPT to become an everyday tool. I use it to flesh out concepts, for technical writing, and JavaScript coding. Hell, last night, I worked with the AI to create an ADHD playlist to help me find my focus when I work. It was a huge success.&#xA;&#xA;Today, OpenAI keeps ChatGPT in a chatbot format. Unless you&#39;re Microsoft or one of OpenAI&#39;s &#34;friends,&#34; you can&#39;t do much more with ChatGPT than text back and forth. The same goes for Google&#39;s Bard (or whatever they&#39;re doing now). This format will soon evolve, and AI will power our virtual voice assistants. Cortana and Google Assistant are poised to take a giant leap forward and be for us everything we hoped Siri would achieve by now.&#xA;&#xA;Siri is shit. Even by non-AI standards, Siri fucking sucks. Half the time, it doesn&#39;t respond. And when it does, it&#39;s a coin toss on whether it will actually help me with what I need. Siri is slow, unhelpful, and dumb; I can&#39;t take it anymore. I want Siri to answer my questions without sending me to the web. I want to write an article as I walk down the street and know what I&#39;m dictating is making it to the page. 
I want Siri to understand context, remember conversations, and know when to reference them.&#xA;&#xA;I know what you&#39;re thinking— Jason is like that guy in the movie Her. He&#39;s going to fall in love with his virtual assistant. Let me tell you something. I am ready to break my heart if that&#39;s what it takes to get the virtual assistant we were all promised.&#xA;&#xA;I&#39;ll jump ship if Microsoft or Google can put a powerful, conversational AI in my ear before Apple. I&#39;ll be the green text bubble in everyone&#39;s group chat. Do you hear me, Tim Cook? I&#39;ll fucking do it, bro. So help me, God.&#xA;&#xA;I don&#39;t want three-thousand-dollar virtual reality goggles. I don&#39;t want new spring colors for our iPhone cases (I do want that). I want you to fix Siri. Do it now. This is embarrassing. I hate it. Get your shit together.&#xA;&#xA;---&#xA;Created: April 5, 2023&#xA;Last Evolved: --&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>I have everything a person entrenched in Apple&#39;s ecosystem would have— iPhone, iPad, Apple Watch, MacBook Pro, etcetera, etcetera.</p>

<p>Type: <a href="https://notesfromjason.writeas.com/tag:Note" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Note</span></a>
Re: <a href="https://notesfromjason.writeas.com/tag:Design" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Design</span></a> <a href="https://notesfromjason.writeas.com/tag:Siri" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Siri</span></a> <a href="https://notesfromjason.writeas.com/tag:Technology" class="hashtag" rel="nofollow"><span>#</span><span class="p-category">Technology</span></a>
</p>

<p>I have a thousand purchased songs and 250 movies on iTunes. My AirPods are an extension of me during work hours (and after work if I&#39;m honest).</p>

<p>Leaving Apple for another ecosystem would be a messy divorce in which I would lose everything. Yet, I yearn. I long. I flirt. Anytime I shout, “Hey, Siri,” I wonder what could be if my virtual assistant wasn&#39;t a fucking idiot.</p>

<p>It didn&#39;t take long for ChatGPT to become an everyday tool. I use it to flesh out concepts, for technical writing, and JavaScript coding. Hell, last night, I worked with the AI to create an <a href="https://music.apple.com/us/playlist/find-your-focus/pl.u-4lRGTam0WqY" rel="nofollow">ADHD playlist</a> to help me find my focus when I work. It was a huge success.</p>

<p>Today, OpenAI keeps ChatGPT in a chatbot format. Unless you&#39;re Microsoft or one of OpenAI&#39;s “friends,” you can&#39;t do much more with ChatGPT than text back and forth. The same goes for Google&#39;s Bard (or whatever they&#39;re doing now). This format will soon evolve, and AI will power our virtual voice assistants. Cortana and Google Assistant are poised to take a giant leap forward and be for us everything we hoped Siri would achieve by now.</p>

<p>Siri is shit. Even by non-AI standards, Siri fucking sucks. Half the time, it doesn&#39;t respond. And when it does, it&#39;s a coin toss on whether it will actually help me with what I need. Siri is slow, unhelpful, and dumb; I can&#39;t take it anymore. I want Siri to answer my questions without sending me to the web. I want to write an article as I walk down the street and know what I&#39;m dictating is making it to the page. I want Siri to understand context, remember conversations, and know when to reference them.</p>

<p>I know what you&#39;re thinking— Jason is like that guy in the movie Her. He&#39;s going to fall in love with his virtual assistant. Let me tell you something. I am ready to break my heart if that&#39;s what it takes to get the virtual assistant we were all promised.</p>

<p>I&#39;ll jump ship if Microsoft or Google can put a powerful, conversational AI in my ear before Apple. I&#39;ll be the green text bubble in everyone&#39;s group chat. Do you hear me, Tim Cook? I&#39;ll fucking do it, bro. So help me, God.</p>

<p>I don&#39;t want three-thousand-dollar virtual reality goggles. I don&#39;t want new spring colors for our iPhone cases (I do want that). I want you to fix Siri. Do it now. This is embarrassing. I hate it. Get your shit together.</p>

<hr/>

<p>Created: April 5, 2023
Last Evolved: —</p>
]]></content:encoded>
      <guid>https://notesfromjason.writeas.com/if-apple-doesnt-fix-siri-soon-im-out</guid>
      <pubDate>Fri, 22 Sep 2023 21:57:54 +0000</pubDate>
    </item>
  </channel>
</rss>