Thursday, April 18, 2024

Sorry for the short notice but I just found out myself. Privacy Foundation Seminar:

Artificial Intelligence and the Practice of Law

Friday April 19th 11:30 – 1:00

1 ethics CLE credit. Contact Kristen Dermyer 303-871-6487 <Kristen.Dermyer@du.edu> to register.





...and not just for lawyers.

https://www.bespacific.com/the-smartest-way-to-use-ai-at-work/

The Smartest Way to Use AI at Work

WSJ via MSN: “By by day, there’s growing pressure at the office. Do you respond to all those clients—or let AI do it? Do you attend that meeting—or do you send a bot? About 20% of employed adults said they have used OpenAI’s ChatGPT for work as of February 2024, up from 8% a year ago, according to Pew Research Center. The most popular uses for AI at work are research and brainstorming, writing first-draft emails and creating visuals and presentations, according to an Adobe survey. Productivity boosts from AI are estimated to be worth trillions of dollars over the next decade, say consultants. Many companies are encouraging their workers to embrace and learn the new tools. The industries that will benefit most are sales and marketing, customer care, software engineering and product development. For most workers, it can make your day-to-day a bit less annoying. “If you’re going to use it as a work tool,” said Lareina Yee, a senior partner at the consulting firm McKinsey and chair of its Technology Council, “you need to think of all the ways it can change your own productivity equation.” Using AI at work could get you fired—or at least in hot water. A judge last year sanctioned a lawyer who relied on fake cases generated by ChatGPT, and some companies have restricted AI’s usage. Other companies and bosses are pushing staff to do more with AI, but you’ll need to follow guidelines. Rule No. 1: Don’t put any company data into a tool without permission. And Rule No. 2: Only use AI to do work you can easily verify, and be sure to check its work…” Uses include: Email; Presentations; Summaries; Meetings.





Too many tools, too little time.

https://www.makeuseof.com/custom-gpts-that-make-chat-gpt-better/

10 Custom GPTs That Actually Make ChatGPT Better

ChatGPT on its own is great, but did you know that you can use custom GPTs to streamline its functionality? Custom GPTs can teach you how to code, plan trips, transcribe videos, and much, much more, and there are heaps for you to choose from.

So, here are the best custom GPTs that actually make ChatGPT a better tool for any situation.





Not sure I believe these numbers…

https://www.edweek.org/technology/see-which-types-of-teachers-are-the-early-adopters-of-ai/2024/04

See Which Types of Teachers Are the Early Adopters of AI

Among social studies and English/language arts teachers, the number of AI users was higher than the general teaching population. Twenty-seven percent of English teachers and social studies teachers use AI tools in their work. By comparison, 19 percent of teachers in STEM disciplines said they use AI, and 11 percent of elementary education teachers reported doing so.



Wednesday, April 17, 2024

I thought this sounded familiar…

https://sloanreview.mit.edu/article/ai-and-statistics-perfect-together/

AI and Statistics: Perfect Together

People are often unsure why artificial intelligence and machine learning algorithms work. More importantly, people can’t always anticipate when they won’t work. Ali Rahimi, an AI researcher at Google, received a standing ovation at a 2017 conference when he referred to much of what is done in AI as “alchemy,” meaning that developers don’t have solid grounds for predicting which algorithms will work and which won’t, or for choosing one AI architecture over another. To put it succinctly, AI lacks a basis for inference: a solid foundation on which to base predictions and decisions.

This makes AI decisions tough (or impossible) to explain and hurts trust in AI models and technologies — trust that is necessary for AI to reach its potential. As noted by Rahimi, this is an unsolved problem in AI and machine learning that keeps tech and business leaders up at night because it dooms many AI models to fail in deployment.

Fortunately, help for AI teams and projects is available from an unlikely source: classical statistics. This article will explore how business leaders can apply statistical methods and statistics experts to address the problem.





Clogging congress. (Or any organization that would take this seriously.)

https://www.schneier.com/blog/archives/2024/04/using-ai-generated-legislative-amendments-as-a-delaying-technique.html

Using AI-Generated Legislative Amendments as a Delaying Technique

Canadian legislators proposed 19,600 amendments —almost certainly AI-generated—to a bill in an attempt to delay its adoption.





Resource.

https://www.bespacific.com/free-guide-learn-how-to-use-chatgpt/

Free guide – Learn how to use ChatGPT

Ben’s Bites – Learn how to use ChatGPT. An introductory overview of ChatGPT, the AI assistant by OpenAI Designed for absolute beginners, this short course explores in simple terms how AI assistant ChatGPT works and how to get started using it.





Tools & Techniques. Could this be trained for other topics?

https://news.yale.edu/2024/04/16/student-developed-ai-chatbot-opens-yale-philosophers-works-all

Student-developed AI chatbot opens Yale philosopher’s works to all

LuFlot Bot, a generative AI chatbot trained on the works of Yale philosopher Luciano Floridi, answers questions on the ethics of digital technology.

Visit this link to converse with LuFlot about the ethics of digital technologies.



Tuesday, April 16, 2024

Is this the first step on the slippery slope to home defense drones armed with napalm and machine guns? (I can see where it would be very satisfying to paint ball a porch pirate!)

https://boingboing.net/2024/04/15/this-armed-security-camera-uses-ai-to-fire-paintballs-or-tear-gas-at-trespassers.html

This armed security camera uses AI to fire paintballs or tear gas at trespassers

PaintCam is an armed home/office security camera that uses AI to spot trespassers and fires paintballs or tear gas projectiles at them. The company's promotional video looks like a parody but apparently this "vigilant guardian that doesn't sleep, blink, or miss a beat" is a real product.

According to New Atlas, the system "uses automatic target marking, face recognition and AI-based decision making to identify unfamiliar visitors to your property, day or night.





When the demand for information is huge, providing anything must be profitable.

https://www.wired.com/story/iran-israel-attack-viral-fake-content/

Fake Footage of Iran’s Attack on Israel Is Going Viral

IN THE HOURS after Iran announced its drone and missile attack on Israel on April 13, fake and misleading posts went viral almost immediately on X. The Institute for Strategic Dialogue (ISD), a nonprofit think tank, found a number of posts that claimed to reveal the strikes and their impact, but that instead used AI-generated videos, photos, and repurposed footage from other conflicts which showed rockets launching into the night, explosions, and even President Joe Biden in military fatigues.

Just 34 of these misleading posts received more than 37 million views, according to ISD. Many of the accounts posting the misinformation were also verified, meaning they have paid X $8 per month for the “blue tick” and that their content is amplified by the platform’s algorithm. ISD also found that several of the accounts claimed to be open source intelligence (OSINT) experts, which has, in recent years, become another way of lending legitimacy to their posts.





I’m trying to get and stay current…

https://aiindex.stanford.edu/report/

Measuring trends in AI

Welcome to the seventh edition of the AI Index report. The 2024 Index is our most comprehensive to date and arrives at an important moment when AI’s influence on society has never been more pronounced. This year, we have broadened our scope to more extensively cover essential trends such as technical advancements in AI, public perceptions of the technology, and the geopolitical dynamics surrounding its development. Featuring more original data than ever before, this edition introduces new estimates on AI training costs, detailed analyses of the responsible AI landscape, and an entirely new chapter dedicated to AI’s impact on science and medicine.

DOWNLOAD THE FULL REPORT

DOWNLOAD INDIVIDUAL CHAPTERS





Tools & Techniques.

https://www.nsa.gov/Press-Room/Press-Releases-Statements/Press-Release-View/Article/3741371/nsa-publishes-guidance-for-strengthening-ai-system-security/

NSA Publishes Guidance for Strengthening AI System Security

The National Security Agency (NSA) is releasing a Cybersecurity Information Sheet (CSI) today, Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems.” The CSI is intended to support National Security System owners and Defense Industrial Base companies that will be deploying and operating AI systems designed and developed by an external entity.



Sunday, April 14, 2024

The evolution of computer crime.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4788909

Hacking Generative AI

Generative AI platforms, like ChatGPT, hold great promise in enhancing human creativity, productivity, and efficiency. However, generative AI platforms are prone to manipulation. Specifically, they are susceptible to a new type of attack called “prompt injection.” In prompt injection, attackers carefully craft their input prompt to manipulate AI into generating harmful, dangerous, or illegal content as output Examples of such outputs include instructions on how to build an improvised bomb, how to make meth, how to hotwire a car, and more. Researchers have also been able to make ChatGPT generate malicious code.

This article asks a basic question: do prompt injection attacks violate computer crime law, mainly the Computer Fraud and Abuse Act? This article argues that they do. Prompt injection attacks lead AI to disregard its own hard-coded content generation restrictions, which allows the attacker to access portions of the AI that are beyond what the system’s developers authorized. Therefore, this constitutes the criminal offense of accessing a computer in excess of authorization. Although prompt injection attacks could run afoul of the Computer Fraud and Abuse Act, this article offers ways to distinguish serious acts of AI manipulation from less serious ones, so that prosecution would only focus on a limited set of harmful and dangerous prompt injections.





Perspective.

https://www.ft.com/content/cde75f58-20b9-460c-89fb-e64fe06e24b9

ChatGPT essay cheats are a menace to us all

The other day I met a British academic who said something about artificial intelligence that made my jaw drop.

The number of students using AI tools like ChatGPT to write their papers was a much bigger problem than the public was being told, this person said.

AI cheating at their institution was now so rife that large numbers of students had been expelled for academic misconduct — to the point that some courses had lost most of a year’s intake. “I’ve heard similar figures from a few universities,” the academic told me.

Spotting suspicious essays could be easy, because when students were asked why they had included certain terms or data sources not mentioned on the course, they were baffled. “They have clearly never even heard of some of the terms that turn up in their essays.”



Saturday, April 13, 2024

Perspective.

https://abovethelaw.com/2024/04/artificial-intelligence-may-not-disrupt-the-legal-profession-for-a-while/

Artificial Intelligence May Not Disrupt The Legal Profession For A While

The work of artificial intelligence definitely still needs to be reviewed by a lawyer.

Ever since ChatGPT roared onto the scene over a year ago, everyone has been talking about how the world will change due to advances in artificial intelligence. Many commentators have singled out the legal industry as a sector that will be particularly impacted by artificial intelligence, since much of the rote work performed by associates can presumably be handled by artificial intelligence in the coming years. Initially, I also believed that artificial intelligence would have a huge impact on the legal profession in the short term, but it now seems that the legal profession will not be materially affected for at least several years, if not longer.



Friday, April 12, 2024

I might find this useful.

https://www.zdnet.com/article/google-and-mit-launch-a-free-generative-ai-course-for-teachers/

Google and MIT launch a free generative AI course for teachers

When considering generative AI in the classroom, many people think of its potential for students; however, teachers can benefit just as much from the technology, if not more. On Thursday, Google and MIT Responsible AI for Social Empowerment and Education (RAISE) unveiled a free Google Generative AI Educators course to help middle and high school teachers use generative AI tools to enhance their workflow and students' classroom experience.

The self-paced, two-hour course instructs teachers how to use generative AI to save time in everyday tasks such as writing emails, modifying content for different reading levels, building creative assessments, structuring activities to students' interests, and more, according to the press release. Teachers can even learn how to use generative AI to help with one of the most time-consuming tasks – lesson planning – by inputting their existing lesson plan into the generative AI models to get ideas on what to do next in the classroom.

https://skillshop.exceedlms.com/student/path/1176018



Thursday, April 11, 2024

Instagram will be looking at every image you send or receive, then modify (blur) the image and finally provide legal advice where it seems appropriate?

https://www.wsj.com/tech/personal-tech/instagram-to-start-blurring-nude-images-in-messages-to-protect-teens-38f8d9c6?st=przebf72a9696mg&reflink=desktopwebshare_permalink

Instagram to Start Blurring Nude Images in Messages to Protect Teens

Instagram is now taking a meaningful step to contain the problem, by automatically detecting and blurring nudes in its direct-messaging service.

Instagram users who receive nude images via direct messages will see a pop-up explaining how to block the sender or report the chat, and a note encouraging the recipient not to feel pressure to respond. People who attempt to send a nude via direct messages will be advised to be cautious and receive a reminder that they can unsend a pic.

If teens receive a nude image on Instagram, the picture will be blurred and they will see a message steering them to safety tips.

The new feature—to be tested in the coming weeks and expected to roll out globally over the next few months—will be on by default for accounts with birth dates corresponding to teenagers, said Instagram’s parent, Meta Platforms. Teens can disable it if they want. Adult accounts will be encouraged to enable the feature.





Narrowing the definition?

https://www.bespacific.com/uspto-ai-guidance-highlights-risks-for-practitioners-and-public/

USPTO AI Guidance Highlights Risks for Practitioners and Public

IP Watchdog: “The U.S. Patent and Trademark Office (USPTO) today announced guidance for practitioners and the public regarding the use of artificial intelligence (AI) in the preparation of filings for submission to the Office. The guidance comes two months after the Office issued a guidance memorandum for the Trademark and Patent Trial and Appeal Boards (TTAB and PTAB) on the misuse of AI tools before the Boards that clarified the application of existing rules to AI submissions. That guidance was in part prompted by Supreme Court Chief Justice John Roberts’ 2023 year-end report, which acknowledged both the benefits and dangers of AI in the context of the legal profession. It also noted President Biden’s Executive Order on Safe, Secure, and Trustworthy AI, which directed the USPTO Director to issue recommendations to the President, in consultation with the Director of the Copyright Office, on potential executive actions to be taken relating to copyright and AI. Today’s draft Federal Register Notice builds upon the February guidance and is aimed at reminding professionals, innovators, and entrepreneurs of the existing USPTO rules that protect against the potential “perils” of AI. These include the Duty of Candor and Good Faith; the Signature Requirement; Confidentiality of Information; Foreign Filing Licenses and Export Regulations; existing electronic systems’ policies; and duties owed to clients…”





This could get a bit complicated if I’m scanning the web. What percentage of websites clearly label the copyright owner of each article?

https://thehill.com/homenews/house/4583318-schiff-unveils-ai-training-transparency-measure/

Schiff unveils AI training transparency measure

Rep. Adam Schiff (D-Calif.) unveiled legislation on Tuesday that would require companies using copyrighted material to train their generative artificial intelligence models to publicly disclose all of the work that they used to do so.

The bill, called the “Generative AI Copyright Disclosure Act,” would require people creating training datasets – or making any significant changes to a dataset – to submit a notice to the Register of Copyrights with a “detailed summary of any copyrighted works used” and the URL for any publicly available material.

… The Register of Copyrights would then publish an online database available to the public with all the notices.