
User talk:Ssdclickofdeath

From en.swpat.org


Welcome

Hi. Thanks for those corrections. If you've any questions about the wiki, you can ask me or put them on Talk:Discuss this wiki. Ciaran (talk) 12:17, 28 December 2013 (EST)

Things are going to get busy

Hi. Just so you know: the current low levels of activity on the wiki are because I'm studying for exams, but things should get a lot busier in a few days because work will begin on briefs to the Supreme Court for the CLS Bank v. Alice case. Ciaran (talk) 19:28, 5 January 2014 (EST)

Javascript and infoboxes

I wanted to check whether that page worked with javascript disabled, but the site has now disappeared because its domain registration expired on 20 Feb 2014.

Do you have a copy of the presentation slides that were on that site? Or do you remember anything about the content which I could stick into search engines to look for that presentation on other sites?

I can't remember much about the site, but I remember thinking it might be useful to talk with the author some day.

Just FYI. Next week I'm going to do some work to add infoboxes to the pages about court rulings. Each page should have a link to the ruling, and where possible the hearing transcript, the Wikipedia page for the case, and whatever else is interesting. I haven't decided yet if we should have different boxes for each country, to cater for the different documents which can be linked to (some countries have hearing transcripts, some don't, some - like Germany - publish the rulings in two parts, most others in one), or if it would work better to just have one box with all the different possible fields, and have the box ignore the fields left blank. I'll check how Wikipedia handles this sort of thing.
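For the "one box which ignores blank fields" option, a rough sketch of how that could look in wikitext (the field names here are made up for illustration, and it assumes the ParserFunctions extension plus the usual {{!}} pipe template/magic word are available on the wiki):

```wikitext
{| class="infobox"
! colspan="2" | {{{case_name|}}}
{{#if: {{{ruling|}}} | {{!}}-
{{!}} Ruling {{!}}{{!}} {{{ruling}}} }}
{{#if: {{{transcript|}}} | {{!}}-
{{!}} Hearing transcript {{!}}{{!}} {{{transcript}}} }}
{{#if: {{{wikipedia|}}} | {{!}}-
{{!}} Wikipedia {{!}}{{!}} {{{wikipedia}}} }}
|}
```

Each {{#if:}} emits its table row only when the corresponding parameter is non-empty, so one template can serve rulings from countries with different document sets.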

The pages about court cases should have standard infoboxes too, but these won't be the same as the boxes for court rulings (mostly because one case can be appealed etc. and produce two, three, sometimes four rulings).

If you've any plans or specific areas you think could be improved, I might be able to help out. Ciaran (talk) 15:06, 21 February 2014 (EST)

The domain redirected to a Google Docs page, but I don't remember the exact name of the document. I looked through the page source to see if there was a direct download link to the document, but there wasn't. All the page said was something like "JavaScript is required to view this page". I have JavaScript disabled because most JS code on websites is proprietary, and I didn't want to run Google's "obfuscript". (What's your view on nonfree JavaScript?) Ssdclickofdeath (talk) 17:23, 23 February 2014 (EST)
I remembered the name and found a page with an alternative link and a youtube video of the presentation: http://wetnet.net/content/software-unpatentable
But you're right that we need to address the issue of non-free javascript. For the ESP main site, the policy is that I don't link to pages that require non-free javascript. But for the wiki, I'm less strict because I view the wiki as being the "development area" for primary documents (e.g. the pages on the main site, or documents written as part of a campaign or for a court case). So one of the uses of the wiki is that anyone can add a link and say "Hey, does anyone know how or where I can download this document without proprietary javascript?"
The issue is also complicated by the fact that some web pages which "require" non-free javascript can be used by specialised free software programs. Example: youtube.com doesn't work in a browser without javascript, but youtube.com links are still useful for free software users because youtube-dl can download the video. (I wonder, is there free software for downloading documents from Google Docs?)
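For anyone unfamiliar with the tool, the youtube-dl workflow looks roughly like this (a sketch; the URL is a placeholder and it assumes youtube-dl is installed):

```shell
# Download a video from a youtube.com link without executing any of the
# page's JavaScript. The URL below is a placeholder, not a real video.
youtube-dl 'https://www.youtube.com/watch?v=VIDEO_ID'

# List the available formats first, without downloading anything:
youtube-dl -F 'https://www.youtube.com/watch?v=VIDEO_ID'
```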
I also think it's ok to enable javascript and run non-free javascript when trying to find a way to give other people access to that document without them running the non-free javascript (e.g. enable javascript on a Google Docs page so you can see the document, and then put the title or keywords into a search engine and try to find the same document on another site that doesn't require non-free javascript.)
Links to pages requiring non-free JS also sometimes get added because I add links without clicking on the link, or I add links while using someone else's computer, so I don't notice the problem. When that happens, it's a bug.
So, my proposal is:
  • Make a template which can be put beside any links that require non-free javascript (e.g. "The linked webpage includes non-free javascript and isn't functional when this is disabled. Can you help us find a copy of this document without this problem?")
  • Make separate templates for sites like youtube.com, informing people how to "use" the link without running the non-free javascript
What do you think? Do you know of any other sites, like youtube, that don't work in a browser without javascript but can still be used with specialised free software? Ciaran (talk) 19:58, 23 February 2014 (EST)
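A rough sketch of what the first template's wikitext could look like (the template name and exact wording are placeholders, nothing has been decided):

```wikitext
<!-- Template:Nonfree-js — placed beside a link to an affected page -->
<small>''The linked webpage includes non-free javascript and isn't
functional when javascript is disabled. Can you help us find a copy of
this document without this problem?''</small>
```

Editors would then just write {{Nonfree-js}} after any affected link.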

For some reason I assumed it was a PDF in a JavaScript viewer like pdf.js, which is built into Firefox. The text is actually in the page source, with some strange, likely proprietary formatting. (I hope it's not, um, patented!)

Why I consider Cloud-dependent packages as non-free software ?

Debian GNU/Linux 6.0 “Squeeze” ships packages [1] that integrate with web services
(called in modern term 'Cloud Computing' or SaaS, 'Software-as-a-Service' if you will), such as the Facebook API and Twitter plug-ins.
What if Facebook decides to close down it's APIs tomorrow ?
Will Debian drop those packages from 6.0-stable release ?

I'm not saying such packages must not exist. They should.

But (!) those packages interface non-free web services, which is
politically no different than non-free software. Technically even worse, because web-software is likely to break at any moment, change APIs, or close down free access to it, and demand either NDA contracts(...)

(See that page's source for more.)

I just looked on GitHub for an appropriate tool, and the only one I found was effectively proprietary, because it had no license. Ssdclickofdeath (talk) 23:14, 23 February 2014 (EST)

Maybe the author just didn't think about adding a licence. If you ask, they might add a free licence. Ciaran (talk) 08:57, 15 May 2015 (EDT)
I have asked on some other projects' pages and never received a reply, but on another repo, the author did add a license when I asked, and accepted a contribution too. It seems as if some people push some code to GitHub and move on with their lives.
Anyway, it doesn't really matter to me right now; I haven't needed to read a file on Google Docs lately. Ssdclickofdeath (talk) 22:41, 15 May 2015 (EDT)

I'm on the spam now


Thanks for clearing that spam. I should have gotten to it sooner but I'm on it now. Ciaran (talk) 11:03, 8 May 2015 (EDT)

You're welcome. I wonder if there are any captchas that are harder for robots to solve. Unless the spammers are human, that is. Ssdclickofdeath (talk) 14:08, 14 May 2015 (EDT)

First some background, then two solutions I'm considering:
Background: We were targeted by a spambot (all our spam had the same style), so I installed QuestyCaptcha, with captcha questions such as "what colour is grass?". The spam level dropped to almost zero, but there were still one or two pieces of spam per week with that same style. This must have been humans because, firstly, I don't think any 2012 spambot system could answer a question about grass, and secondly, if a spambot could answer that question then its spam should all get through rather than just once a week.
But with such simple questions, why would humans only beat the captcha once or twice a week? My guess is that this work is shared between many people in some country where almost no one (at that wage level) speaks English.
Then, recently, the spam levels shot up. I think this was due to a minor advance in spambot technology: They're probably saving the questions and answers somewhere, so once one of their people beats the question, then 100% of the spam flood gets past the captcha.
The current (temporary) solution is that I've made account creation impossible. All our spam comes from accounts rather than IP addresses, so we're getting zero spam. Preventing account creation was actually an accidental side-effect of playing with MediaWiki:Titleblacklist. I don't want account creation to be blocked, but I was very busy and this stopped the spam, so it has stayed that way until now.
Solution 1: Change the captcha questions frequently. Currently I have to ask the sysadmins to modify LocalSettings.php each time I want to change the questions, but I don't like bothering them. To fix this I could ask them to install QuestyPage (which also requires Lockdown or NamespaceReadRestrict but I don't know which is better).
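For reference, the questions live in LocalSettings.php roughly like this, which is why the sysadmins have to be involved each time (a sketch based on the ConfirmEdit extension's documented configuration; the question/answer pair is just an example):

```php
# LocalSettings.php — ConfirmEdit with the QuestyCaptcha module
require_once "$IP/extensions/ConfirmEdit/ConfirmEdit.php";
require_once "$IP/extensions/ConfirmEdit/QuestyCaptcha.php";
$wgCaptchaClass = 'QuestyCaptcha';

# Each entry pairs a question with its accepted answer.
$wgCaptchaQuestions[] = array(
    'question' => 'What colour is grass?',
    'answer'   => 'green',
);
```

Moving this list onto a protected wiki page (as QuestyPage is meant to allow) would remove the need to edit the file at all.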
Solution 2: Replace the current login page with something that the spambots don't know how to use. For example, we could set up a single sign-on system so that people can make an account which works on the wiki and on the main wordpress site. This would only address the spam from accounts, but since that's currently where 100% of our spam comes from, this will be enough (for now at least).
Or maybe we'll do both. If you've any suggestions, I'm all ears :) Ciaran (talk) 17:52, 14 May 2015 (EDT)
I found a captcha generator called Simplecaptcha. It's written in Java and released under "the" BSD license. I don't know if that would work or not, but it may be worth looking at. Ssdclickofdeath (talk) 22:58, 15 May 2015 (EDT)
Thanks. It would probably work very well. The spambot networks put a lot of effort into automatically beating the widely-used captcha systems, but they might never have looked at Simplecaptcha. That said, to keep upgrades simple, my preference is for solutions that require as few new packages as possible or which are widely used in the MediaWiki community. But I'll keep Simplecaptcha as a backup idea. Ciaran (talk) 06:17, 19 May 2015 (EDT)