
Your Pet Tributes: 'Sassy'

To my beautiful little girl Sassy, I hope wherever you are you are happy and well. I look at your photos that are all around the house and wish you were





Your Pet Loss Diaries: 'Lisa & Diana', My Beautiful Diana, Nov 17, 2013

Hi my baby girl, How are you? Are you playing and having a good time? Are you staying close to Rufus? I hope you're happy and have all kinds of new friends





Your Pet Loss Diaries: 'Lisa & Rufus', My Beloved Rufus, Nov 17, 2013

Hi my big guy, how are you? Are you having fun? Are you playing and have you made new friends? Are you keeping an eye on Diana? I hope you are happy and





strip for April 17, 2020 - Attorney-at-Law





strip for April 22, 2020 - Like and Subscribe





strip for April 24, 2020 - Yes and...





strip for April 27, 2020 - Quarks





Original Art Up for Grabs!

Friends! I put a fun piece of original art up on eBay, starting at one penny: Go snag it! --> https://www.ebay.com/itm/293555003346





strip for April 30, 2020 - Thunderdome





strip for May 1, 2020 - Megatron





strip for May 4, 2020 - Two Strategies





strip for May 6, 2020 - Family Secrets





origin of love

The Origin of Love (Hedwig and the Angry Inch): When the earth was still flat and the clouds made of fire and the mountains stretched up to the sky, sometimes higher, folks roamed the earth like big rolling kegs; they had two sets of arms, they had two sets of legs, they had two faces […]





My CNN editorial, how it all came to be

  So I wrote an op-ed about the recent Macmillan/ebooks kerfuffle for CNN. Here’s how that all worked…. I got...





Ask A Librarian: VPNs?

  From a Vermont librarian: VPNs are really important and I’d like to remind our patrons about them, but it...





Ask A Librarian: Hard Drive Cleanup for Macs?

  I am looking for someone who can help me find and clear out excess data on one of my...





Ask A Librarian: What About Controlled Digital Lending?

From a friend: Please explain to me your enthusiasm for controlled digital lending. Please let me know what you think...





Ask a Librarian: Older person wanting to learn about tech

Subtitled: What’s the Yahoo! Internet Life for this generation? From a friend: A nice older lady asked for advice on...





Ask A Librarian: Graphic Novels for Boomers?

I was wondering if you might give my little women’s (boomers) some guidance as to a beginning graphic novel for...





2019 in Libraries

  Visiting libraries is great. Neat things to learn about communities, comfy places to sit, clean bathrooms. I went to...





Ask A Librarian: What is the deal with “free” ebook sites?

It’s been an odd set of months. I got busy with Drop-In Time and then very un-busy. I’ve been keeping...





Is It A Crime to Stop the Economy?

[I am happy to turn this space over to my former colleague and (I trust) lifelong friend Romans Pancs, who offers what he describes as a polemical essay. It has no references and no confidence intervals. It has question marks. It makes a narrow point and does not weigh pros and cons. It is an […]





Gorillaz dedicate a song to the late Tony Allen

The band Gorillaz have unveiled the song How Far?, which was written together with drummer Tony Allen and is dedicated to him.





Switching phubb's HTTP client - Christian Weiske

phubb is a WebSub hub that notifies subscribers in realtime when your website is updated.

Up to this year, phubb sent HTTP requests (GET + POST) with file_get_contents() and an HTTP stream context - see my previous example.
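
For readers unfamiliar with that approach, here is a minimal sketch of sending a request with file_get_contents() and a stream context. This is only the general technique, not phubb's actual code; the variable names are placeholders.

    <?php
    // Generic sketch: POST request via file_get_contents() and a stream context.
    // $subscriberUrl and $topicUrl are hypothetical placeholders.
    $context = stream_context_create([
        'http' => [
            'method'        => 'POST',
            'header'        => "Content-Type: application/x-www-form-urlencoded\r\n",
            'content'       => http_build_query(['hub.mode' => 'publish', 'hub.url' => $topicUrl]),
            'timeout'       => 10,
            'ignore_errors' => true, // also return the body for 4xx/5xx responses
        ],
    ]);
    $body = file_get_contents($subscriberUrl, false, $context);
    // After the call, $http_response_header contains the raw response headers.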

But then I needed a 100% correct way of detecting a page's Hub URL, and copied the code from phinde, my blog search engine. With that I introduced a dependency on PEAR's good old HTTP_Request2 library and I decided to use that library for all requests.

Unfortunately, now the problems began: During development I got an error in about one out of every 10-20 requests on my machine and could not find the cause:

PHP Fatal error:  Uncaught HTTP_Request2_MessageException: Malformed response:  in HTTP/Request2/Adapter/Socket.php on line 1019

#0 HTTP/Request2/Adapter/Socket.php(1019): HTTP_Request2_Response->__construct('', true, Object(Net_URL2))
#1 HTTP/Request2/Adapter/Socket.php(136): HTTP_Request2_Adapter_Socket->readResponse()
#2 HTTP/Request2.php(946): HTTP_Request2_Adapter_Socket->sendRequest(Object(phubbHttpRequest))
#3 phubb/src/phubb/HttpRequest.php(22): HTTP_Request2->send()
#4 phubb/src/phubb/Task/Publish.php(283): phubbHttpRequest->send()
#5 phubb/src/phubb/Task/Publish.php(248): phubbTask_Publish->fetchTopic(Object(phubbModel_Topic))
#6 phubb/src/phubb/Task/Publish.php(77): phubbTask_Publish->checkTopicUpdate('http://push-tes...')
#7  in HTTP/Request2/Response.php on line 215

The socket adapter has this problem, and I did not want to try to debug that strange problem. (No idea if the cURL one has it; I do not want to rely on php-curl). Finding a new HTTP library was the only option.

New HTTP library

The PHP Framework Interop Group has several HTTP-related proposals; one of them is PSR-18: HTTP Client. Now that we have a standardized way to send HTTP requests in 2020, I should use a library that implements it.

The psr-18 topic on GitHub listed some clients.

Symfony's HTTP client was among them, and it provides a mock client for unit tests! Unfortunately, it also introduces a million dependencies.

There were two others that looked ok-ish on first sight (diciotto and http-client-curl) but both of them had no mock client, and the latter was even curl only. Again nothing for me.

Then I found PHP-HTTP that promises a standard interface for HTTP clients in PHP, and it supports PSR-18! It even has a socket client that has nearly no dependencies, and a mock client for unit tests. I'll try that one for now.
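
The appeal of coding against the PSR-18 interface is that the client becomes swappable. The sketch below is my own illustration with made-up function and variable names, not phubb's code; it depends only on the PSR interfaces, so in tests it could be fed Http\Mock\Client from php-http/mock-client instead of the socket client.

    <?php
    use Psr\Http\Client\ClientInterface;
    use Psr\Http\Message\RequestFactoryInterface;

    // Hypothetical helper: fetch a topic URL through any PSR-18 client.
    function fetchTopic(ClientInterface $client, RequestFactoryInterface $requestFactory, string $url): string
    {
        $request  = $requestFactory->createRequest('GET', $url);  // PSR-17 request factory
        $response = $client->sendRequest($request);               // sendRequest() is the whole PSR-18 interface

        return (string) $response->getBody();
    }

    // In production $client could be PHP-HTTP's socket client; in a unit test,
    // Http\Mock\Client can be preloaded with addResponse() and inspected
    // afterwards with getRequests().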





PHP Internals News: Episode 50: The RFC Process - Derick Rethans


In this episode of "PHP Internals News", Henrik Gemal (LinkedIn, Website) asks me about how PHP's RFC process works, and I try to answer all of his questions.

The RSS feed for this podcast is https://derickrethans.nl/feed-phpinternalsnews.xml, you can download this episode's MP3 file, and it's available on Spotify and iTunes. There is a dedicated website: https://phpinternals.news

Transcript

Derick Rethans 0:16

Hi, I'm Derick. And this is PHP internals news, a weekly podcast dedicated to demystifying the development of the PHP language. This is Episode 50. Today I'm talking with Henrik Gemal after he reached out with a question. You might know that at the end of every podcast, I ask: if you have any questions, feel free to email me. And Henrik was the first person to actually do so within a year and a half's time. For the fun of it, I'm thinking that instead of me asking the questions, I'm letting Henrik ask the questions today, because he suggested that we should do a podcast about how the RFC process actually works. Henrik, would you please introduce yourself?

Henrik Gemal 0:52

Yeah, my name is Henrik Gemal. I live in Denmark. I'm the CTO of DinnerBooking, which does reservation systems for restaurants. I've been doing PHP development for more than 10 years, but I'm not coding so much now. Now I'm managing a big team of PHP developers. And I've also been involved in the open source development of Mozilla Firefox.

Derick Rethans 1:19

So usually I prepare the questions, but in this case, Henrik has prepared the questions. So I'll hand over to him to get started with them. And I'll try to do my best to answer the questions.

Henrik Gemal 1:27

I heard a lot about these RFCs. And I was interested in the process of it. So I'm just starting right off here, who can actually do an RFC? Is it anybody on the internet?

Derick Rethans 1:38

Yeah, pretty much. In order to be able to do an RFC, what you would need is you need to have an idea. And then you need access to our wiki system to be able to actually start writing that, well not to write them, to publish it. The RFC process is open for everybody. In the last year and a half or so, some of the podcasts that I've done have been with people that have been contributing to PHP for a long time. But in other cases, it's people like yourself that have an idea, come up, work together with somebody to work on a patch, and then create an RFC out of that. And that then goes through the whole process. And sometimes they get accepted, and sometimes they don't.

Henrik Gemal 2:16

How technical are the RFCs? Is it like coding? Or is it more like the idea in general?

Derick Rethans 2:23

The idea needs to be there, it needs to be thought out. It needs to have a good reason for why we want to add or change something in PHP. The motivation is almost as important as what the change or addition actually is about. Now, that doesn't always get adhered to, in my opinion, but that is an important thing. Now with the idea we need to talk about what impact it has on the rest of the ecosystem, whether there are backward compatibility breaks in there, how it affects extensions, or sometimes how it affects OPCache. Sometimes considerations have to be taken for that because it's, it's something quite important in the PHP ecosystem. And it is recommended that it comes with a patch, because it's often a lot easier to talk about an implementation than to talk about the idea. But that is not a necessity. There have been quite some RFCs where the idea was there, but there wasn't a patch right away yet. It is less likely that these RFCs will g

Truncated by Planet PHP, read more at the original (another 15224 bytes)





PHP Internals News: Episode 51: Object Ergonomics - Derick Rethans


In this episode of "PHP Internals News" I talk with Larry Garfield (Twitter, Website, GitHub) about a blog post that he has written related to PHP's Object Ergonomics.

The RSS feed for this podcast is https://derickrethans.nl/feed-phpinternalsnews.xml, you can download this episode's MP3 file, and it's available on Spotify and iTunes. There is a dedicated website: https://phpinternals.news

Transcript

Derick Rethans 0:16

Hi, I'm Derick. And this is PHP internals news, a weekly podcast dedicated to demystifying the development of the PHP language. This is Episode 51. Today I'm talking with Larry Garfield, not about an RFC for once, but about a blog post that he's written called Object Ergonomics. Larry, would you please introduce yourself?

Larry Garfield 0:38

Hello World. My name is Larry Garfield, also Crell, CRELL, on various social medias. I work at platform.sh in developer relations. We're a continuous deployment cloud hosting company. I've been writing PHP for 21 years and been an active gadfly and nudge for at least 15 of those.

Derick Rethans 1:01

In the last couple of months, we have seen quite a lot of smaller RFCs about all kinds of little features here and there, to do with making the object oriented model of PHP a little bit better. I reckon this is also the nudge behind you writing a slightly longer blog post titled "Improving PHP object ergonomics".

Larry Garfield 1:26

If by slightly longer you mean 14 pages? Yes.

Derick Rethans 1:29

Yes, exactly. Yeah, it took me a while to read through. What made you write this document?

Larry Garfield 1:34

As you said, there's been a lot of discussion around improving PHP's general user experience of working with objects in PHP. Where there's definitely room for improvement, no question. And I found a lot of these to be useful in their own right, but also very narrow, and narrow in ways that solve the immediate problem but could get in the way of solving larger problems later on down the line. So I went into this with an attitude of: Okay, we can kind of piecemeal and attack certain parts of the problem space. Or we can take a step back and look at the big picture and say: Alright, here's all the pain points we have. What can we do that would solve not just this one pain point. But let us solve multiple pain points with a single change? Or these two changes together solve this other pain point as well. Or, you know, how can we do this in a way that is not going to interfere with later development that we've talked about, that we know we want to do, but hasn't been done yet. So how do we not paint ourselves into a corner by thinking too narrow?

Derick Rethans 2:41

It's a curious thing, because a more narrow RFC is likely easier to get accepted, because it doesn't pull in a whole set of other problems as well. But of course, as you say, if the whole idea hasn't been thought through, then some of these things might not actually end up being beneficial. Because it can be combined with some other things to directly address the problems that we're trying to solve, right?

Larry Garfield 3:07

Yeah, it comes down to what are the smallest changes we can make that taken together have the largest impact. That kind of broad picture thinking is something that is hard to do in PHP, just given the way it's structured. So I took a stab at that.

Derick Rethans 3:21

What are the main problems that we should address?

Larry Garf

Truncated by Planet PHP, read more at the original (another 29525 bytes)





Xdebug Update: April 2020 - Derick Rethans


Another monthly update where I explain what happened with Xdebug development in this past month. These will be published on the first Tuesday after the 5th of each month. Patreon supporters will get it earlier, on the first of each month. You can become a patron to support my work on Xdebug. If you are leading a team or company, then it is also possible to support Xdebug through a subscription.

In March, I worked on Xdebug for about 60 hours, on the following things:

Xdebug 2.9.5

The 2.9.5 release addresses a few bugs. One of them was a follow on from the issue where Xdebug would crash when another extension would run code in PHP's Request Init stage, but only on a second or later request in the same PHP process. As this is not something that's easy to catch with PHP's testing framework that Xdebug uses, this issue slipped through the cracks.

The release fixes another bug, where throwing an exception from within a destructor would crash Xdebug. The fix for this was merely making sure that PHP's internal state is still available:

- if (!(ZEND_CALL_INFO(EG(current_execute_data)) & ZEND_CALL_HAS_SYMBOL_TABLE)) {
+ if (EG(current_execute_data) && !(ZEND_CALL_INFO(EG(current_execute_data)) & ZEND_CALL_HAS_SYMBOL_TABLE)) {
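
As an aside, a hypothetical minimal script showing the kind of situation described above, an exception thrown from within a destructor; this sketch is not the actual reproduction case from the bug report.

    <?php
    // Hypothetical reproduction sketch (not the actual bug report case):
    // an exception thrown from within a destructor, the scenario that made
    // Xdebug crash before the EG(current_execute_data) check shown above.
    class ThrowingDestructor
    {
        public function __destruct()
        {
            throw new RuntimeException('thrown during object destruction');
        }
    }

    $obj = new ThrowingDestructor();
    // When $obj is destroyed, __destruct() runs and throws.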

Beyond these two crashes, the release also addressed an issue where Xdebug did not always correctly detect where executable code could exist for code coverage analysis. Over the last decade, PHP has been getting more and more optimised, with more internal engine instructions. Unfortunately that sometimes means that these are not hooked into by Xdebug, to see whether there could be a line of code that would make use of these opcodes. As this is often very dependent on how developers lay out their code, these issues are often found by them. Luckily, these issues are trivially fixed, as long as I have access to just the file containing that code. I then analyse it with vld to see which opcode (PHP engine instruction) I have missed.

Xdebug 3 and Xdebug Cloud

Most of my time was spent on getting Xdebug Cloud to a state where I can invite select developers to alpha test it. This includes allowing for Xdebug to connect to Xdebug Cloud. There is currently a branch available, but it still lacks the addition of SSL encryption, which is a requirement for allowing safe transport of debug information.

The communications between an IDE and Xdebug through Xdebug Cloud is working, with a few things related to detecting disconnections more reliably still outstanding.

As Xdebug Cloud needs integration in debugging clients (such as PhpStorm, and other IDEs), I have been extending the dbgpProxy tool to act as an intermediate link between existing IDEs and Xdebug Cloud without IDEs having to change anything. This work is still ongoing, and is not documented yet, but I hope to finish that in the next week. Once that and SSL support in the Xdebug to Xdebug Cloud communication has been finalized, I will reach out to subscribers of the Xdebug Cloud newsletter to see if anybody is interested in trying it out.

Podcast

The PHP Internals News continues its second season. Episodes in the last month included a discussion on PHP 8's JIT engine and increasing complexity,

Truncated by Planet PHP, read more at the original (another 720 bytes)






PHP Internals News: Episode 52: Floats and Locales - Derick Rethans


In this episode of "PHP Internals News" I talk with George Banyard (Website, Twitter, GitHub, GitLab) about an RFC that he has proposed together with Máté Kocsis (Twitter, GitHub, LinkedIn) to make PHP's float to string logic no longer use locales.

The RSS feed for this podcast is https://derickrethans.nl/feed-phpinternalsnews.xml, you can download this episode's MP3 file, and it's available on Spotify and iTunes. There is a dedicated website: https://phpinternals.news

Transcript

Derick Rethans 0:16

Hi, I'm Derick. And this is PHP internals news, a weekly podcast dedicated to demystifying the development of the PHP language. This is Episode 52. Today I'm talking with George Banyard about an RFC that he's made together with Mate Kocsis. This RFC is titled locale independent floats to string. Hello, George, would you please introduce yourself?

George Banyard 0:39

Hello, I'm George Peter Banyard. I'm a student at Imperial College and I work on PHP in my free time.

Derick Rethans 0:47

All right, so we're talking about local independent floats. What is the problem here?

George Banyard 0:52

Currently when you do a float to string conversion, so when casting or displaying a float, the conversion will depend on the current locale. So instead of always using the decimal dot separator, for example, if you have a German or the French locale enabled, it will use a comma to separate the decimals.

Derick Rethans 1:14

Okay, I can understand that that could be a bit confusing. What are these locales exactly?

George Banyard 1:20

So locales, which are more or less C locales, which PHP exposes to user land, are a way to change a bunch of rules on how strings and similar things get displayed at the C level. One of the issues with it is that it's global. For example, if you use the thread safe PHP version, then setlocale() is not thread safe, so it will just impact other threads where you're using it.

Derick Rethans 1:50

So a locale is a set of rules to format specific things, with floating point numbers being one of them. In which situations does the locale influence the display of floating point numbers: in every situation in PHP, or only in some?

George Banyard 2:06

Yes, it only impacts certain aspects, which is quite surprising. So a string cast, the strval() function, var_dump(), and debug_zval_dump() will all be affected by the locale's decimal separator, and also printf() with the percentage lowercase f, but that's expected because it's locale aware, compared to the capital F modifier.

Derick Rethans 2:32

But it doesn't, for example, have the same problem in the serialize() function or, say, var_export().

George Banyard 2:37

Yeah, and json_encode() also doesn't do that. PDO has special code which also handles this, so that all the PDO drivers get a consistent float to string treatment, because that could impact the databases.
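
(For illustration, a minimal sketch of the behaviour being discussed, assuming a German locale such as de_DE is installed and a PHP version from before this RFC was implemented; this example is not from the episode itself.)

    <?php
    // Minimal illustration of locale-dependent float to string conversion.
    // Assumes a German locale is installed and PHP from before this RFC (< 8.0).
    setlocale(LC_ALL, 'de_DE.UTF-8', 'de_DE');

    $price = 3.5;

    echo (string) $price, "\n";          // "3,5"  - string cast is locale aware here
    echo strval($price), "\n";           // "3,5"  - same for strval()
    printf("%f\n", $price);              // "3,500000" - lowercase %f is locale aware
    printf("%F\n", $price);              // "3.500000" - capital %F is not
    echo var_export($price, true), "\n"; // "3.5" - not locale aware
    echo json_encode($price), "\n";      // "3.5" - not locale aware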

Derick Rethans 2:53

How is it a problem that, with some locales enabled, PHP then uses a comma instead of the decimal point? How can this cause bugs in PHP applications?

Truncated by Planet PHP, read more at the original (another 17468 bytes)





'Job Creating' Sprint T-Mobile Merger Triggers Estimated 6,000 Non-Covid Layoffs

Back when T-Mobile and Sprint were trying to gain regulatory approval for their $26 billion merger, executives repeatedly promised the deal would create jobs. Not just a few jobs, but oodles of jobs. Despite the fact that US telecom history indicates such deals almost always trigger mass layoffs, the media dutifully repeated T-Mobile and Sprint executive claims that the deal would create "more than 3,500 additional full-time U.S. employees in the first year and 11,000 more people by 2024."

About that.

Before the ink on the deal was even dry, T-Mobile began shutting down its Metro prepaid business and laying off impacted employees. When asked about the conflicting promises, T-Mobile refused to respond to press inquiries. Now that shutdown has accelerated, with estimates that roughly 6,000 employees at the T-Mobile subsidiary have been laid off as the freshly-merged company closes unwanted prepaid retailers. T-Mobile says the move, which has nothing to do with COVID-19, is just them "optimizing their retail footprint." Industry insiders aren't amused:

"Peter Adderton, the founder of Boost Mobile in Australia and in the U.S. who has been a vocal advocate for the Boost brand and for dealers since the merger was first proposed, figures the latest closures affect about 6,000 people. He cited one dealer who said he has to close 95 stores, some as early as May 1.

In their arguments leading up to the merger finally getting approved, executives at both T-Mobile and Sprint argued that it would not lead to the kind of job losses that many opponents were predicting. They pledged to create jobs, not cut them.

“The whole thing is exactly how we called it, and no one is calling them out. It’s so disingenuous,” Adderton told Fierce, adding that it’s not because of COVID-19. Many retailers in other industries are closing stores during the crisis but plan to reopen once it’s safe to do so."

None of this should be a surprise to anybody. Everybody from unions to Wall Street stock jocks had predicted the deal would trigger anywhere between 15,000 and 30,000 layoffs over time as redundant support, retail, and middle management positions were eliminated. It's what always happens in major US telecom mergers. There is 40 years of very clear, hard data speaking to this point. Yet in a blog post last year (likely to be deleted by this time next year), T-Mobile CEO John Legere not only insisted layoffs would never happen, he effectively accused unions, experts, consumer groups, and a long line of economists of lying:

"This merger is all about creating new, high-quality, high-paying jobs, and the New T-Mobile will be jobs-positive from Day One and every day thereafter. That’s not just a promise. That’s not just a commitment. It’s a fact....These combined efforts will create nearly 5,600 new American customer care jobs by 2021. And New T-Mobile will employ 7,500+ more care professionals by 2024 than the standalone companies would have."

That was never going to happen. Less competition and revolving door, captured regulators and a broken court system means there's less than zero incentive for T-Mobile to do much of anything the company promised while it was wooing regulators. And of course such employment growth is even less likely to happen under a pandemic, which will provide "wonderful" cover for cuts that were going to happen anyway.

Having watched more telecom megadeals like this than I can count, what usually happens is the companies leave things generally alone for about a year to keep employees calm and make it seem like deal critics were being hyperbolic. Then, once the press and public is no longer paying attention (which never takes long), the hatchets come out and the downsizing begins. When the layoffs and reduced competition inevitably arrives, they're either ignored or blamed on something else. In this case, inevitably, COVID-19.

In a few years, the regulators who approved the deal will have moved on to think tank, legal or lobbying positions at the same companies they "regulated." The same press that over-hyped pre-merger promises won't follow back up, because there's no money in that kind of hindsight policy reporting or consumer advocacy. And executives like John Legere (who just quit T-Mobile after selling his $17.5 million NYC penthouse to Giorgio Armani) are dutifully rewarded, with the real world market and human cost of mindless merger mania quickly and intentionally forgotten.





Hedge Fund 'Asshole' Destroying Local News & Firing Reporters Wants Google & Facebook To Just Hand Him More Money

Have you heard of Heath Freeman? He's a thirty-something hedge fund boss, who runs "Alden Global Capital," which owns a company misleadingly called "Digital First Media." His business has been to buy up local newspapers around the country and basically cut everything down to the bone, and just milk the assets for whatever cash they still produce, minus all the important journalism stuff. He's been called "the hedge fund asshole", "the hedge fund vampire that bleeds newspapers dry", "a small worthless footnote", the "Gordon Gekko" of newspapers and a variety of other fun things.

Reading through some of those links above, you find a standard playbook for Freeman's managing of newspapers:

These are the assholes who a few years ago bought the Denver Post, once one of the best regional newspapers in the country, and hollowed it out into a shell of its former self, then laid off some more people. Things got so bad that the Post’s own editorial board rebelled, demanding that if “Alden isn’t willing to do good journalism here, it should sell the Post to owners who will.”

And here's one of the other links from above telling a similar story:

The Denver newsroom was hardly alone in its misery. In Northern California, a combined editorial staff of 16 regional newspapers had reportedly been slashed from 1,000 to a mere 150. Farther down the coast in Orange County, staff, according to industry analyst Ken Doctor, complained of rats, mildew, fallen ceilings, and filthy bathrooms. In her Washington Post column, media critic Margaret Sullivan called Alden "one of the most ruthless of the corporate strip-miners seemingly intent on destroying local journalism."

And, yes, I think it's fair to say that many newspapers did get a bit fat and happy with their old school monopolistic hold on the news market pre-internet. And many of them failed to adapt. And so, restructuring and re-prioritizing is not a bad idea. But that's not really what's happening here. Alden appears to be taking profitable (not just struggling) newspapers, and squeezing as much money out of them directly into Freeman's pockets, rather than plowing it back into actual journalism. And Alden/DFM appears to be ridiculously profitable for Freeman, even as the journalism it produces becomes weaker and weaker. Jim Brady called it "combover journalism." Basically using skeleton staff to pretend to really be covering the news, when it's clear to everyone that it's not really doing the job.

All of that is prelude to the latest news that Freeman, who basically refuses to ever talk to the media, has sent a letter to other newspaper bosses suggesting they collude to force Google and Facebook to make him even richer.

You can see the full letter here:


Let's go through this nonsense bit by bit, because it is almost 100% nonsense.

These are immensely challenging times for all of us in the newspaper industry as we balance the two equally important goals of keeping the communities we serve fully informed, while also striving to safeguard the viability of our news organizations today and well into the future.

Let's be clear: the "viability" of your newsrooms was decimated when you fired a huge percentage of the local reporters and stuffed the profits into your pockets, rather than investing in the actual product.

Since Facebook was founded in 2004, nearly 2,000 (one in five) newspapers have closed and with them many thousands of newspaper jobs have been lost. In that same time period, Google has become the world's primary news aggregation service, Apple launched a news app with a subscription-based tier and Twitter has become a household name by serving as a distribution service for the content our staffs create.

Correlation is not causation, of course. But even if that were the case, the focus of a well-managed business would be to adapt to the changing market place to take advantage of, say, new distribution channels, new advertising and subscription products, and new ways of building a loyal community around your product. You know, the things that Google, Facebook and Twitter did... which your newspaper didn't do, perhaps because you fired a huge percentage of their staff and re-directed the money flow away from product and into your pocket.

Recent developments internationally, which will finally require online platforms to compensate the news industry are encouraging. I hope we can collaborate to move this issue forward in the United States in a fair and productive way. Just this month, April 2020, French antitrust regulators ordered Google to pay news publishers for displaying snippets of articles after years of helping itself to excerpts for its news service. As regulators in France said, "Google's practices caused a serious and immediate harm to the press sector, while the economic situation of publishers and news agencies is otherwise fragile." The Australian government also recently said that Facebook and Google would have to pay media outlets in the country for news content. The country's Treasurer, Josh Frydenberg noted "We can't deny the importance of creating a level playing field, ensuring a fair go for companies and the appropriate compensation for content."

We have, of course, written about both the plans in France as well as those in Australia (not to mention a similar push in Canada that Freeman apparently missed). Of course, what he's missing is... well, nearly everything. First, the idea that it's Google that's causing problems for the news industry is laughable on multiple fronts.

If newspapers feel that Google is causing them harm by linking to them and sending them traffic, then they can easily block Google, which respects robots.txt restrictions. I don't see Freeman's newspaper doing that. Second, in most of the world, Google does not monetize its Google News aggregation service, so the idea that it's someone making money off of "their" news, is not supported by reality. Third, the idea that "the news" is "owned" by the news organizations is not just laughable, but silly. After all, the news orgs are not making the news. If Freeman is going to claim that news orgs should be compensated for "their" news, then, uh, shouldn't his news orgs be paying the actual people who make the news that they're reporting on? Or is he saying that journalism is somehow special?
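
For reference, opting a site out of Google's crawler really is as simple as a robots.txt file at the site root. This is a generic example, not any particular publisher's configuration:

    # robots.txt - tell Google's crawler to skip the entire site
    User-agent: Googlebot
    Disallow: /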

Finally, and most importantly, he says all of this as if we haven't seen how these efforts play out in practice. When Germany passed a similar law, Google ended up removing snippets only to be told they had to pay anyway. Google, correctly, said that if it had to license snippets, it would offer a price of $0, or it would stop linking to the sites -- and the news orgs agreed. In Spain, where Google was told it couldn't do this, the company shut down Google News and tons of smaller publications were harmed, not helped, by this policy.

This surely sounds familiar to all of us. It's been more than a decade since Rupert Murdoch instinctively observed: "There are those who think they have a right to take our news content and use it for their own purposes without contributing a penny to its production... Their almost wholesale misappropriation of our stories is not fair use. To be impolite, it's theft."

First off, it's not theft. As we pointed out at the time, Rupert Murdoch, himself, at the very time he was making these claims, owned a whole bunch of news aggregators himself. The problem was never news aggregators. The problem has always been that other companies are successful on the internet and Rupert Murdoch was not. And, again, the whole "misappropriation" thing is nonsense: any news site is free to block Google's scrapers and if it's "misappropriation" to send you traffic, why do all of these news organizations employ "search engine optimizers" who work to get their sites higher in the rankings? And, yet again, are they paying the people who make the actual news? If not, then it seems like they're full of shit.

With Facebook and Google recently showing some contrition by launching token programs that provide a modest amount of funding, it's heartening to see that the tech giants are beginning to understand their moral and social responsibility to support and safeguard local journalism.

Spare me the "moral and social responsibility to support and safeguard local journalism," Heath. You're the one who cut 1,000 journalism jobs down to 150. Not Google. You're the one who took profitable newspapers that were investing in local journalism, fired a huge number of their reporters and staff, and redirected the even larger profits into your pockets instead of local journalism.

Even if someone wants to argue this fallacy, it should not be you, Heath.

Facebook created the Facebook Journalism Project in 2017 "to forge stronger ties with the news industry and work with journalists and publishers." If Facebook and the other tech behemoths are serious about wanting to "forge stronger ties with the news industry," that will start with properly remunerating the original producers of content.

Remunerating the "original producers"? So that means that Heath is now agreeing to compensate the people who create the news that his remaining reporters write up? Oh, no? He just means himself -- the middleman -- being remunerated directly into his pocket while he continues to cut jobs from his newsroom while raking in record profits? That seems... less compelling.

Facebook, Google, Twitter, Apple News and other online aggregators make billions of dollars annually from original, compelling content that our reporters, photographers and editors create day after day, hour after hour. We all know the numbers, and this one underscores the value of our intellectual property: The New York Times reported that in 2018, Google alone conservatively made $4.7 billion from the work of news publishers. Clearly, content-usage fees are an appropriate and reasonable way to help ensure newspapers exist to provide communities across the country with robust high-quality local journalism.

First of all, the $4.7 billion is likely nonsense, but even if it were accurate, Google is making that money by sending all those news sites a shit ton of traffic. Why aren't they doing anything reasonable to monetize it? And, of course, Digital First Media has bragged about its profitability, and leaked documents suggest its news business brought in close to a billion dollars in 2017 with a 17% operating margin, significantly higher than all other large newspaper chains.

This is nothing more than "Google has money, we want more money, Google needs to give us the money." There is no "clearly" here and "usage fees" are nonsense. If you don't want Google's traffic, put up robots.txt. Google will survive, but your papers might not.

One model to consider is how broadcast television stations, which provide valuable local news, successfully secured sizable retransmission fees for their programming from cable companies, satellite providers and telcos.

There are certain problems with retransmission fees in the first place (given that broadcast television was, by law, freely transmitted over the air in exchange for control over large swaths of spectrum), and the value they got was in having a large audience to advertise to. But, more importantly, retransmission involved taking an entire broadcast channel and piping it through cable and satellite to make things easier for TV watchers who didn't want to switch between an antenna and a cable (or satellite receiver). An aggregator is not -- contrary to what one might think reading Freeman's nonsense -- retransmitting anything. It's linking to your content and sending you traffic on your own site. The only things it shows are a headline and (sometimes) a snippet to attract more traffic.

There are certainly other potential options worthy of our consideration -- among them whether to ask Congress about revisiting thoughtful limitations on "Fair Use" of copyrighted material, or seeking judicial review of how our trusted content is misused by others for their profit. By beginning a collective dialogue on these topics we can bring clarity around the best ways to proceed as an industry.

Ah, yes, let's throw fair use -- the very thing that news orgs regularly rely on to not get sued into the ground -- out the window in an effort to get Google to funnel extra money into Heath Freeman's pockets. That sounds smart. Or the other thing. Not smart.

And "a collective dialogue" in this sense appears to be collusion. As in an antitrust violation. Someone should have maybe mentioned that to Freeman.

Our newspaper brands and operations are the engines that power trusted local news in communities across the United States.

Note that it's the brands and operations -- not journalists -- that he mentions here. That's a tell.

Fees from those who use and profit from our content can help continually optimize our product as well as ensure our newsrooms have the resources they need.

Again, Digital First Media, is perhaps the most profitable newspaper chain around. And it just keeps laying off reporters.

My hope is that we are able to work together towards the shared goal of protecting and enhancing local journalism.

You first, Heath, you first.

So, basically, Heath Freeman, who has spent a decade or so buying up profitable newspapers, laying off a huge percentage of their newsrooms, leaving a shell of a husk in their place, then redirecting the continued profits (which often exist solely because of the legacy brand) into his own pockets rather than into journalism... wants the other newspapers to collude with him to force successful internet companies who send their newspapers a ton of free traffic to pay him money for the privilege of sending them traffic.

Sounds credible.





Appeals Court Says Prosecutors Who Issued Fake Subpoenas To Crime Victims Aren't Shielded By Absolute Immunity

For years, the Orleans Parish District Attorney's Office in Louisiana issued fake subpoenas to witnesses and crime victims. Unlike subpoenas used in ongoing prosecutions, these were used during the investigation process to compel targets to talk to law enforcement. They weren't signed by judges or issued by court clerks but they did state in bold letters across the top that "A FINE AND IMPRISONMENT MAY BE IMPOSED FOR FAILURE TO OBEY THIS NOTICE."

Recipients of these bogus subpoenas sued the DA's office. In early 2019, a federal court refused to grant absolute immunity to the DA's office for its use of fake subpoenas to compel cooperation from witnesses. The court pointed out that issuing its own subpoenas containing threats of imprisonment bypassed an entire branch of the government to give the DA's office power it was never supposed to have.

Allegations that the Individual Defendants purported to subpoena witnesses without court approval, therefore, describe more than a mere procedural error or expansion of authority. Rather, they describe the usurpation of the power of another branch of government.

The court stated that extending immunity would be a judicial blessing of this practice, rather than a deterrent against continued abuse by the DA's office.

The DA's office appealed. The Fifth Circuit Appeals Court took the case, but it seemed very unimpressed by the office's assertions. Here's how it responded during oral arguments earlier this year:

“Threat of incarceration with no valid premise?” Judge Jennifer Elrod said at one point during arguments. She later drew laughter from some in the audience when she said, “This argument is fascinating.”

“These are pretty serious assertions of authority they did not have,” said Judge Leslie Southwick, who heard arguments with Elrod and Judge Catharina Haynes.

The Appeals Court has released its ruling [PDF] and it will allow the lawsuit to proceed. The DA's office has now been denied immunity twice. Absolute immunity shields almost every action taken by prosecutors during court proceedings. But these fake subpoenas were sent to witnesses whom prosecutors seemingly had no interest in ever having testify in court. This key difference means prosecutors will have to face the state law claims brought by the plaintiffs.

Based upon the pleadings before us at this time, it could be concluded that Defendants’ creation and use of the fake subpoenas was not “intimately associated with the judicial phase of the criminal process,” but rather fell into the category of “those investigatory functions that do not relate to an advocate’s preparation for the initiation of a prosecution or for judicial proceedings.” See Hoog-Watson v. Guadalupe Cty., 591 F.3d 431, 438 (5th Cir. 2009)

[...]

Defendants were not attempting to control witness testimony during a break in judicial proceedings. Instead, they allegedly used fake subpoenas in an attempt to pressure crime victims and witnesses to meet with them privately at the Office and share information outside of court. Defendants never used the fake subpoenas to compel victims or witnesses to testify at trial. Such allegations are of investigative behavior that was not “intimately associated with the judicial phase of the criminal process.”

Falling further outside the judicial process was the DA's office itself, which apparently felt the judicial system didn't need to be included in its subpoena efforts.

In using the fake subpoenas, Individual Defendants also allegedly intentionally avoided the judicial process that Louisiana law requires for obtaining subpoenas.

The case returns to the lower court where the DA's office will continue to face the state law claims it hoped it would be immune from. The Appeals Court doesn't say the office won't ultimately find some way to re-erect its absolute immunity shield, but at this point, it sees nothing on the record that says prosecutors should be excused from being held responsible for bypassing the judicial system to threaten crime victims and witnesses with jail time.





Harrisburg University Researchers Claim Their 'Unbiased' Facial Recognition Software Can Identify Potential Criminals

Given all we know about facial recognition tech, it is literally jaw-dropping that anyone could make this claim… especially without being vetted independently.

A group of Harrisburg University professors and a PhD student have developed an automated computer facial recognition software capable of predicting whether someone is likely to be a criminal.

The software is able to predict if someone is a criminal with 80% accuracy and with no racial bias. The prediction is calculated solely based on a picture of their face.

There's a whole lot of "what even the fuck" in CBS 21's reprint of a press release, but let's start with the claim about "no racial bias." That's a lot to swallow when the underlying research hasn't been released yet. Let's see what the National Institute of Standards and Technology has to say on the subject. This is the result of the NIST's examination of 189 facial recognition AI programs -- all far more established than whatever it is Harrisburg researchers have cooked up.

Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.

The faces of African American women were falsely identified more often in the kinds of searches used by police investigators where an image is compared to thousands or millions of others in hopes of identifying a suspect.

Why is this acceptable? The report inadvertently supplies the answer:

Middle-aged white men generally benefited from the highest accuracy rates.

Yep. And guess who's making laws or running police departments or marketing AI to cops or telling people on Twitter not to break the law or etc. etc. etc.

To craft a terrible pun, the researchers' claim of "no racial bias" is absurd on its face. Per se stupid af to use legal terminology.

Moving on from that, there's the 80% accuracy, which is apparently good enough since it will only threaten the life and liberty of 20% of the people it's inflicted on. I guess if it's the FBI's gold standard, it's good enough for everyone.

Maybe this is just bad reporting. Maybe something got copy-pasted wrong from the spammed press release. Let's go to the source… one that somehow still doesn't include a link to any underlying research documents.

What does any of this mean? Are we ready to embrace a bit of pre-crime eugenics? Or is this just the most hamfisted phrasing Harrisburg researchers could come up with?

A group of Harrisburg University professors and a Ph.D. student have developed automated computer facial recognition software capable of predicting whether someone is likely going to be a criminal.

The most charitable interpretation of this statement is that the wrong-20%-of-the-time AI is going to be applied to the super-sketchy "predictive policing" field. Predictive policing -- a theory that says it's ok to treat people like criminals if they live and work in an area where criminals live -- is its own biased mess, relying on garbage data generated by biased policing to turn racist policing into an AI-blessed "work smarter not harder" LEO equivalent.

The question about "likely" is answered in the next paragraph, somewhat assuring readers the AI won't be applied to ultrasound images.

With 80 percent accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face. The software is intended to help law enforcement prevent crime.

There's a big difference between "going to be" and "is," and researchers using actual science should know better than to use both phrases to describe their AI efforts. One means scanning someone's face to determine whether they might eventually engage in criminal acts. The other means matching faces to images of known criminals. They are far from interchangeable terms.

If you think the above quotes are, at best, disjointed, brace yourself for this jargon-fest which clarifies nothing and suggests the AI itself wrote the pullquote:

“We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection,” Sadeghian said. “This research indicates just how powerful these tools are by showing they can extract minute features in an image that are highly predictive of criminality.”

"Minute features in an image that are highly predictive of criminality." And what, pray tell, are those "minute features?" Skin tone? "I AM A CRIMINAL IN THE MAKING" forehead tattoos? Bullshit on top of bullshit? Come on. This is word salad, but a salad pretending to be a law enforcement tool with actual utility. Nothing about this suggests Harrisburg has come up with anything better than the shitty "tools" already being inflicted on us by law enforcement's early adopters.

I wish we could dig deeper into this but we'll all have to wait until this excitable group of clueless researchers decide to publish their findings. According to this site, the research is being sealed inside a "research book," which means it will take a lot of money to actually prove this isn't any better than anything that's been offered before. This could be the next Clearview, but we won't know if it is until the research is published. If we're lucky, it will be before Harrisburg patents this awful product and starts selling it to all and sundry. Don't hold your breath.





Fans Port Mario 64 To PC And Make It Way Better, So Of Course Nintendo Is Trying To Nuke The Project

I'm lucky enough to own a decades old Nintendo 64 and a handful of games, including the classic Mario 64. My kids love that game. Still, the first thing they asked when I showed it to them the first time is why the screen was letterboxed, why the characters looked like they were made of lego blocks, and why I needed weird cords to plug it all into the flat screen television. The answer to these spoiled monsters' questions, of course, is that the game is super old and wasn't meant to be played on modern televisions. It's the story of a lot of older games, though many PC games at least have a healthy modding community that will take classics and get them working on present day hardware. Consoles don't have that luxury.

Well, usually, that is. It turns out that enough folks were interested in modernizing Mario 64 that a group of fans managed to pull off porting it to PC. And, because this is a port and not emulation, they managed to update it to run in 4k graphics and added a ton of modern visual effects.

Last year, Super Mario 64's N64 code was reverse-engineered by fans, allowing for all kinds of new and exciting things to be done with Nintendo’s 1996 classic. Like building a completely new PC port of the game, which can run in 4K and ultra-wide resolutions.

This is a very new and cool thing! Previously, if you were playing Super Mario 64 on PC, you were playing via emulation, as your PC ran code pretending to be an N64. This game is made specifically for the PC, built from the ground up, meaning it not only runs like a dream, but even supports mod stuff like ReShade, allowing for graphical tweaks (like the distance blur seen here).

As you'll see, the video the Kotaku post is referencing can't be embedded here because Nintendo already took it down. Instead, I'll use another video that hasn't been taken down at the time of this writing, so you can see just how great this looks.

In addition to videos of the project, Nintendo has also been busy firing off legal salvos to get download links for the PC port of the game taken down from wherever it can find them. Now, while Nintendo's reputation for IP protectionism is such that it would almost certainly take this fan project down under virtually any circumstances, it is also worth noting that the company has a planned re-release of Mario 64 for its latest Nintendo console. That likely only supercharged the speed with which it is trying to disappear this labor of love from fans of an antiquated game that have since moved on to gaming on their PCs.

But why should the company do this? Nintendo consoles are known for many things, including user-friendly gaming and colorful games geared generally towards younger audiences. You know, exactly not the people who would take it on themselves to get an old Mario game working on their PC instead of a Nintendo console. What threat does this PC port from fans represent to Nintendo revenue? It's hard to imagine that threat is anything substantial.

And, yet, here we are anyway. Nintendo, after all, doesn't seem to be able to help itself.





No, Congress Can't Fix The Broken US Broadband Market In A Mad Dash During A Pandemic

COVID-19 has shone a very bright light on the importance of widely available, affordable broadband. Nearly 42 million Americans lack access to any broadband whatsoever--double FCC estimates. And millions more can't afford service thanks to a lack of competition among very powerful, government pampered telecom monopolies.

As usual, with political pressure mounting to "do something," DC's solution is going to be to throw more money at the problem:

"The plan unveiled Thursday would inject $80 billion over five years into expansion of broadband infrastructure into neglected rural, suburban and urban areas, with an emphasis on communities with high levels of poverty. It includes measures to promote rapid building of internet systems, such as low-interest financing for infrastructure projects."

To be clear, subsidies often do help shore up broadband availability and coverage. The problem is that the United States government, largely captured by telecom giants with a vested interest in protecting regional monopolies, utterly sucks at it.

Despite ample pretense to the contrary, nobody in the US government actually knows where broadband is currently available. Data supplied by ISPs has never been rigorously fact-checked by a government fearful of upsetting deep-pocketed campaign contributors (and valued NSA partners). As a result, our very expensive ($350 million at last count) FCC broadband coverage map creates a picture of availability and speed that's complete fantasy. It's theater designed to disguise the fact that US broadband is mediocre on every broadband metric that matters. Especially cost.

While there has been some effort to fix the mapping problem via recent legislation, the FCC still needs several years (and more money) to do so. And while you'd think this would be more obvious, you can't fix a problem you can't even effectively measure. There's also not much indication that the $80 billion, while potentially well intentioned, would actually get where it needs to go. Especially right now, when federal oversight is effectively nonexistent.

You may or may not have noticed this, but US telecom is a corrupt, monopolized mess. Giants like AT&T and Comcast all but own state and federal legislatures and, in many instances, literally write the law. Feckless regulators bend over backward to avoid upsetting deep-pocketed campaign contributors. So when subsidies are doled out, they very often don't end up where regulators and lawmakers intended. There's an endless ocean of examples where these giants took billions in taxpayer subsidies to deploy fiber networks that are never fully delivered.

If you were to do a meaningful audit (which we've never done because, again, we're not willing to adequately track the problem or stand up to dominant incumbent corporations) you'd very likely find that American taxpayers already paid for fiber to every home several times over.

That's not to say that there aren't things Congress could do to help the disconnected during COVID-19. Libraries, for example, have been begging the FCC for the ability to offer expanded WiFi hotspot access (via mobile school buses) to disconnected communities without running afoul of FCC ERate rules. But while the FCC said libraries can leave existing WiFi on without penalty, it has been mute about whether they can extend coverage outside of library property. Why? As a captured agency, the FCC doesn't like anything that could potentially result in Comcast or AT&T making less money.

None of this is to say that we shouldn't subsidize broadband deployment once we get a handle on the mapping problem. But it's a fantasy to think we're going to immediately fix a 30 year old problem with an additional $80 billion in a mad dash during a pandemic. US broadband dysfunction was built up over decades. It's the product of corruption and rot that COVID-19 is exposing at every level of the US government. The only way to fix it is to stand up to industry, initiate meaningful reform, adopt policies that drive competition to market, and jettison feckless lawmakers and regulators whose dominant motivation is in protecting AT&T, Verizon, Comcast, and Spectrum revenues.

Maybe the pandemic finally provides the incentive to actually do that, but until the US does, these subsidization efforts are largely theater.





Sketchy Gets Sketchier: Senator Loeffler Received $9 Million 'Gift' Right Before She Joined The Senate

Kelly Loeffler is, by far, the wealthiest elected official in Congress, with an estimated net worth of half a billion dollars (the second wealthiest is Montana Rep. Greg Gianforte (famous for his body slamming a journalist for asking him a question and then lying to the police about it)). Loeffler may be used to getting away with tearing up the red tape in her previous life, but in Congress, that often looks pretty corrupt. In just the last few months since she was appointed, there were concerns about her stock sales and stock purchases, which seemed oddly matched to information she was getting during briefings regarding the impact of COVID-19. She has since agreed to convert all her stock holdings to managed funds outside of her control (something every elected official should do, frankly).

Now, the NY Times is noting another form of what we've referred to as "soft corruption" -- moves that might technically be legal, but which sure look sketchy as hell to any regular non-multimillionaire elected official. In this case, Senator Loeffler received what was, in effect, a gift worth $9 million from her former employer, Intercontinental Exchange (the company that runs the NY Stock Exchange, and where her husband is the CEO).

The key issue was that, since she was leaving the job to join the Senate, she had a bunch of unvested stock. For normal people, if you leave a job before your stock vests, too bad. That's the deal. The vesting period is there for a reason. But for powerful, rich people, apparently the rules change. Intercontinental Exchange changed the rules to grant her compensation she wasn't supposed to get, because why not?

Ms. Loeffler, who was appointed to the Senate in December and is now in a competitive race to hold her seat, appears to have received stock and other awards worth more than $9 million from the company, Intercontinental Exchange, according to a review of securities filings by The New York Times, Ms. Loeffler’s financial disclosure form and interviews with compensation and accounting experts. That was on top of her 2019 salary and bonus of about $3.5 million.

The additional compensation came in the form of shares, stock options and other instruments that Ms. Loeffler had previously been granted but was poised to forfeit by leaving the company. Intercontinental Exchange altered the terms of the awards, allowing her to keep them. The largest component — which the company had previously valued at about $7.8 million — was a stake in an Intercontinental Exchange subsidiary that Ms. Loeffler had been running.

The entitlement factor oozes out of the statement put out by her office in response to this:

“Kelly left millions in equity compensation behind to serve in public office to protect freedom, conservative values and economic opportunity for all Georgians,” said Stephen Lawson, a spokesman for Ms. Loeffler. “The obsession of the liberal media and career politicians with her success shows their bias against private sector opportunity in favor of big government.”

No, Stephen, that's not the issue. The issue is that normal people whose stock hasn't vested don't get to have the board change the vesting rules on their way out the door to go legislate, handing them a $9 million windfall they didn't earn because it hadn't vested. If it had just been a question of compensation, no one would be complaining. If she had played by the rules everyone else plays by, lived up to her end of the contract, and vested the equity, then no big deal. The problem is the last-minute changing of the rules to get her a pretty massive payout (perhaps not by her standards, but by anyone else's).

Indeed, the details show that this wasn't just a timing thing, like a standard vesting deal, but that Loeffler was supposed to reach certain milestones to be able to get the equity. She didn't, but she still gets it. That's the part that has people concerned.

In February 2019, Intercontinental Exchange gave Ms. Loeffler a stake in a limited liability company that owned a stake in Bakkt, according to a March 2019 securities filing. The company at the time estimated the award was worth $15.6 million. But Ms. Loeffler would be able to cash in on the award only under certain circumstances, including if Bakkt’s value soared or if it became a publicly traded company.

When Ms. Loeffler stepped down from the company less than 10 months later, she was poised to forfeit much of that Bakkt stake. But Intercontinental Exchange sped up the vesting process so that she got half of it immediately.

The company, of course, puts a nice spin on it, saying "We admire Kelly’s decision to serve her country in the U.S. Senate and did not want to discourage that willingness to serve,” but what else are they going to say anyway?

Still waiting for that supposed swamp draining we keep hearing about.




ri

As More Students Sit Online Exams Under Lockdown Conditions, Remote Proctoring Services Carry Out Intrusive Surveillance

The coronavirus pandemic and its associated lockdowns in most countries have forced major changes in the way people live, work and study. Online learning is now routine for many, and is largely unproblematic, not least because it has been used for many years. However, online testing is trickier, since many teachers are concerned that students might use their isolated situation to cheat during exams. One person's problem is another person's opportunity, and there are a number of proctoring services that claim to stop, or at least minimize, cheating during online tests. One thing they have in common is that they tend to be intrusive, and show little respect for the privacy of the people they monitor.

As an article in The Verge explains, some employ humans to watch over students using Zoom video calls. That's reasonably close to a traditional setup, where a teacher or proctor watches students in an exam hall. But there are also webcam-based automated approaches, as explored by Vox:

For instance, Examity also uses AI to verify students' identities, analyze their keystrokes, and, of course, ensure they're not cheating. Proctorio uses artificial intelligence to conduct gaze detection, which tracks whether a student is looking away from their screens.
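
To make the "gaze detection" idea concrete, here is a minimal, hypothetical sketch of the kind of presence check a webcam-based proctoring tool might run, written in Python with OpenCV's stock Haar cascade face detector. The webcam index, frame threshold, and flagging behavior are assumptions for illustration only; this is not Examity's or Proctorio's actual logic, which is far more elaborate (identity verification, keystroke analysis, and so on).

import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)   # default webcam (assumed device index)
missed_frames = 0           # consecutive frames with no frontal face
ALERT_THRESHOLD = 30        # roughly one second at 30 fps (assumed)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(faces) == 0:
        # No frontal face: the student may have turned away from the screen.
        missed_frames += 1
    else:
        missed_frames = 0

    if missed_frames >= ALERT_THRESHOLD:
        print("FLAG: no frontal face detected -- candidate may be looking away")
        missed_frames = 0

cap.release()

Even this crude heuristic shows the privacy trade-off: to flag "looking away," the software has to capture and analyze a continuous video feed of the student's face and surroundings.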

It's not just in the US that these extreme surveillance methods are being adopted. In France, the University of Rennes 1 is using a system called Managexam, which adds a few extra features: the ability to detect "inappropriate" Internet searches by the student, the use of a second screen, or the presence of another person in the room (original in French). The Vox article notes that even when these systems are deployed, students still try to cheat using new tricks, and the anti-cheating services try to stop them from doing so:

it's easy to find online tips and tricks for duping remote proctoring services. Some suggest hiding notes underneath the view of the camera or setting up a secret laptop. It's also easy for these remote proctoring services to find out about these cheating methods, so they're constantly coming up with countermeasures. On its website, Proctorio even has a job listing for a "professional cheater" to test its system. The contract position pays between $10,000 and $20,000 a year.

As the arms race between students and proctoring services escalates, it's surely time to ask whether the problem isn't people cheating, but the use of old-style, analog testing formats in a world that has been forced by the coronavirus pandemic to move to a completely digital approach. Rather than spending so much time, effort and money on trying to stop students from cheating, maybe we need to come up with new ways of measuring what they have learnt and understood -- not ones that are merely resistant to cheating, but ones where cheating has no meaning. Obvious options include "open book" exams, where students can use whatever resources they like, or even abolishing formal exams completely and opting for continuous assessment. Since the lockdown has forced educational establishments to re-invent teaching, isn't it time they re-invented exams too?

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.




ri

It's Not Even Clear If Remdesivir Stops COVID-19, And Already We're Debating How Much It Can Price Gouge

You may recall that in the early days of the pandemic, pharma giant Gilead Sciences -- which has been accused of price gouging and (just last year!) charging exorbitant prices on drug breakthroughs developed with US taxpayer funds -- was able to sneak through an orphan drug designation for its drug remdesivir for COVID-19 treatment. As we pointed out, everything about this was insane, given that orphan drug designations, which give extra monopoly rights to the holders (beyond patent exclusivity), are meant for diseases that don't impact a large population. Gilead used a loophole: since the ceiling for infected people to qualify for orphan drug status is 200,000, Gilead got its application in bright and early, before there were 200,000 confirmed cases (we currently have over 1.3 million). After the story went, er... viral, Gilead agreed to drop the orphan status, given the bad publicity it was receiving.

After a brief dalliance with chloroquine, remdesivir is suddenly back in demand as the new hotness among possible COVID-19 treatments. Still, a close reading of the research might give one pause. There have been multiple conflicting studies, and Gilead's own messaging has been a mess.

On April 23, 2020, news of the study’s failure began to circulate. It seems that the World Health Organization (WHO) had posted a draft report about the trial on their clinical trials database, which indicated that the scientists terminated the study prematurely due to high levels of adverse side effects.

The WHO withdrew the report, and the researchers published their results in The Lancet on April 29, 2020.

The number of people who experienced adverse side effects was roughly similar between those receiving remdesivir and those receiving a placebo. In 18 participants, the researchers stopped the drug treatment due to adverse reactions.

But then...

However, also on April 29, 2020, the National Institute of Allergy and Infectious Diseases (NIAID) announced that their NIH trial showed that remdesivir treatment led to faster recovery in hospital patients with COVID-19, compared with placebo treatment.

“Preliminary results indicate that patients who received remdesivir had a 31% faster time to recovery than those who received placebo,” according to the press release. “Specifically, the median time to recovery was 11 days for patients treated with remdesivir compared with 15 days for those who received placebo.”

The mortality rate in the remdesivir treatment group was 8%, compared with 11.6% in the placebo group, indicating that the drug could improve a person’s chances of survival. These data were close to achieving statistical significance.

And then...

“In addition, there is another Chinese trial, also stopped because the numbers of new patients with COVID-19 had fallen in China so they were unable to recruit, which has not yet published its data,” Prof. Evans continues. “There are other trials where remdesivir is compared with non-remdesivir treatments currently [being] done and results from some of these should appear soon.”

Gilead also put out its own press release about another clinical trial, which seems more focused on determining the optimal length of remdesivir treatment. Suffice it to say, there's still a lot of conflicting data and no clear information on whether or not remdesivir actually helps.
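
To see why the reported mortality difference (8% vs. 11.6%) was only "close to" statistically significant, here is a quick back-of-the-envelope two-proportion z-test in Python. This is purely illustrative: the article doesn't give the trial's group sizes, so the patient counts below are assumptions, not actual trial data.

from math import sqrt, erfc

def two_proportion_p_value(x1, n1, x2, n2):
    # Two-sided p-value for a two-proportion z-test with a pooled standard error.
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return erfc(abs(z) / sqrt(2))   # two-sided tail probability of the standard normal

# Hypothetical arms of 500 patients each: 8% of 500 = 40 deaths, 11.6% of 500 = 58 deaths.
print(two_proportion_p_value(40, 500, 58, 500))   # ~0.056, just above the usual 0.05 cutoff

With arms of that (assumed) size, the difference falls just short of the conventional 0.05 threshold, which is what "close to achieving statistical significance" means in practice: suggestive, but not yet conclusive.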

Still, that hasn't stopped people from trying to figure out just how much Gilead will price gouge going forward:

The Institute for Clinical and Economic Review (ICER), which assesses effectiveness of drugs to determine appropriate prices, suggested a maximum price of $4,500 per 10-day treatment course based on the preliminary evidence of how much patients benefited in a clinical trial. Consumer advocacy group Public Citizen on Monday said remdesivir should be priced at $1 per day of treatment, since “that is more than the cost of manufacturing at scale with a reasonable profit to Gilead.”

Some Wall Street investors expect Gilead to come in at $4,000 per patient or higher to make a profit above remdesivir’s development cost, which Gilead estimates at about $1 billion.

So... we've got a range of $10 to $4,500 for a treatment that we don't yet know works, and which may or may not save lives. But, given that we're in the midst of a giant debate over things like "reopening the economy" -- something that can really only be done if the public is not afraid of dying (or at least becoming deathly ill) -- the value to the overall economy seems much greater than whatever amount Gilead wants to charge. The right thing to do -- again, if it's shown that remdesivir actually helps -- may be to just hand over a bunch of money to Gilead, say "thank you very much," and get the drug distributed as widely as possible. Though, again, it should be noted that a decent chunk of the research around remdesivir was not done or paid for by Gilead, but (yet again) by public universities with public funds. The idea that Gilead should get to reap massive rewards for that seems sketchy at best. But the absolute worst outcome is one in which Gilead sticks to its standard operating procedure and prices the drug so that millions of Americans can't afford it, prolonging and expanding the pandemic.




ri

Court Of Appeals Affirms Lower Court Tossing BS 'Comedians In Cars' Copyright Lawsuit

Six months ago, which feels like roughly an eternity at this point, we discussed how Jerry Seinfeld and others won an absolutely ludicrous copyright suit filed against them by Christian Charles, a writer and director Seinfeld hired to help him create the pilot episode of Comedians In Cars Getting Coffee. What was so strange about the case was that the pilot had been created in 2012, whereas the lawsuit was only filed in 2018 -- timing that coincided with Seinfeld inking a lucrative deal with Netflix to stream his show.

It's not the most well-known aspect of copyright law, but there is, in fact, a statute of limitations for copyright claims, and it's three years. Under the statute, the clock essentially starts running once someone who would bring a copyright claim has had their ownership of the work publicly disputed, or has otherwise been put on notice. Seinfeld argued that he told Charles he was employing him in a work-for-hire arrangement, which would satisfy that notice. His lawyers also pointed out that Charles goes completely uncredited in the pilot episode, which would further put him on notice. The court tossed the case based on the statute of limitations.

For some reason, Charles appealed the ruling. Well, now the Court of Appeals has affirmed that lower ruling, which hopefully means we can all get back to not filing insane lawsuits, please.

We conclude that the district court was correct in granting defendants’ motion to dismiss, for substantially the same reasons that it set out in its well-reasoned opinion. The dispositive issue in this case is whether Charles’s alleged “contributions . . . qualify [him] as the author and therefore owner” of the copyrights to the show. Kwan, 634 F.3d at 229. Charles disputes that his claim centers on ownership. But that argument is seriously undermined by his statements in various filings throughout this litigation which consistently assert that ownership is a central question.

Charles’s infringement claim is therefore time-barred because his ownership claim is time-barred. The district court identified two events described in the Second Amended Complaint that would have put a reasonably diligent plaintiff on notice that his ownership claims were disputed. First, in February 2012, Seinfeld rejected Charles’s request for backend compensation and made it clear that Charles’s involvement would be limited to a work-for-hire basis. See Gary Friedrich Enters., LLC v. Marvel Characters, Inc., 716 F.3d 302, 318 (2d Cir. 2013) (noting that a copyright ownership claim would accrue when the defendant first communicates to the plaintiff that the defendant considers the work to be a work-for-hire). Second, the show premiered in July 2012 without crediting Charles, at which point his ownership claim was publicly repudiated. See Kwan, 634 F.3d at 227. Either one of these developments was enough to place Charles on notice that his ownership claim was disputed and therefore this action, filed six years later, was brought too late.

And that should bring this all to a close, hopefully. This seems like a pretty clear attempt at a money grab by Charles once Seinfeld's show became a Netflix cash-cow. Unfortunately, time is a measurable thing and his lawsuit was very clearly late.




ri

Can we use good works to determine if a person is a Christian? (Matthew 7:15-19)

In Matthew 7:15-19, Jesus tells His disciples how to tell good teachers from bad teachers. He tells them to look at the fruit. Is Jesus telling people to look at the lives of other teachers to see if they have good works? No! Not at all. Listen to the study to see what Jesus IS teaching and why this is important for properly understanding the gospel.




ri

Will all True Christians produce good fruit? (Matthew 13:22-24)

In Matthew 13:22-24, Jesus talks about the fourth soil in the Parable of the Four Soils, and says that only this fourth soil produces good fruit. Does this parable show us how to tell true Christians from false Christians, or how to know who truly has eternal life? No! Not at all. Listen to the study to see what Jesus IS teaching and why this is important for properly understanding the gospel.




ri

Hot times in the British Parliament

I should be explaining what's been going on in the British Parliament, with links and explanations. Unfortunately I can't, because...




ri

From Playing Games to Committing Crimes: A Multi-Technique Approach to Predicting Key Actors on an Online Gaming Forum

I recently travelled to Pittsburgh, USA, to present the paper “From Playing Games to Committing Crimes: A Multi-Technique Approach to Predicting Key Actors on an Online Gaming Forum” at eCrime 2019, co-authored with Ben Collier and Alice Hutchings. The accepted version of the paper can be accessed here. The structure and content of various underground … Continue reading From Playing Games to Committing Crimes: A Multi-Technique Approach to Predicting Key Actors on an Online Gaming Forum




ri

Identifying Unintended Harms of Cybersecurity Countermeasures

In this paper (winner of the eCrime 2019 Best Paper award), we consider the types of things that can go wrong when you intend to make things better and more secure. Consider this scenario. You are browsing through the Internet and see a news headline on one of the presidential candidates. You are unsure if the … Continue reading Identifying Unintended Harms of Cybersecurity Countermeasures




ri

Three Paper Thursday: The role of intermediaries, platforms, and infrastructures in governing crime and abuse

The platforms, providers, and infrastructures which together make up the contemporary Internet play an increasingly central role in the business of governing human societies. Although the software engineers, administrators, business professionals, and other staff working at these organisations may not have the institutional powers of state organisations such as law enforcement or the civil service, … Continue reading Three Paper Thursday: The role of intermediaries, platforms, and infrastructures in governing crime and abuse




ri

Three Paper Thursday: Adversarial Machine Learning, Humans and everything in between

Recent advancements in Machine Learning (ML) have taught us two main lessons: that a large proportion of things that humans do can actually be automated, and that a substantial part of this automation can be done with minimal human supervision. One no longer needs to select features for models to use; in many cases people are … Continue reading Three Paper Thursday: Adversarial Machine Learning, Humans and everything in between




ri

Three Paper Thursday: Exploring the Impact of Online Crime Victimization

Just as in other types of victimization, victims of cybercrime can experience serious consequences, emotional or not. First of all, a repeat victim of a cyber-attack might face serious financial or emotional hardship. These victims are also more likely to require medical attention as a consequence of online fraud victimization. This means repeat victims have a … Continue reading Three Paper Thursday: Exploring the Impact of Online Crime Victimization




ri

#441003 - Chorizo Tacos Recipe



This dinner recipe has become my new favorite! It's easy, fast and tastes as great as the local taqueria!

craving more? check out TasteSpotting




ri

#441008 - Tandoori Garlic Roti Flatbread Recipe



It’s time to try these homemade Tandoori Garlic Rotis. These delicious flatbreads are extremely easy to make and can easily be customized to suit everyone’s tastes.

craving more? check out TasteSpotting




ri

#441016 - Hibiscus Jalapeno Kargarita Cocktail Recipe



Hibiscus tea mixed with tequila, lime, jalapeno, and pineapple makes this one delicious cocktail!

craving more? check out TasteSpotting