Things are going to get busy
Hi. Just so you know: the current low levels of activity on the wiki are because I'm studying for exams, but things should get a lot busier in a few days because work will begin on briefs to the Supreme Court for the CLS Bank v. Alice case. Ciaran (talk) 19:28, 5 January 2014 (EST)
Do you have a copy of the presentation slides that were on that site? Or do you remember anything about the content which I could stick into search engines to look for that presentation on other sites?
I can't remember much about the site, but I remember thinking it might be useful to talk with the author some day.
Just FYI. Next week I'm going to do some work to add infoboxes to the pages about court rulings. Each page should have a link to the ruling, and where possible the hearing transcript, the Wikipedia page for the case, and whatever else is interesting. I haven't decided yet whether we should have different boxes for each country, to cater for the different documents that can be linked to (some countries publish hearing transcripts, some don't; some - like Germany - publish rulings in two parts, most others in one), or whether it would work better to have just one box with all the possible fields and have the box ignore any fields left blank. I'll check how Wikipedia handles this sort of thing.
The pages about court cases should have standard infoboxes too, but these won't be the same as the boxes for court rulings (mostly because one case can be appealed etc. and produce two, three, sometimes four rulings).
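The "one box, ignore blank fields" option can be done with a single template using the ParserFunctions `#if` parser function, which renders a row only when the parameter is non-empty. A minimal sketch - the template name and parameter names here are made up for illustration, and it assumes ParserFunctions is installed and a `{{!}}` template/magic word is available for pipes inside the conditional:

```wikitext
<!-- Hypothetical Template:Infobox court ruling -->
{| class="infobox"
! colspan="2" | {{{case_name|}}}
{{#if: {{{ruling_url|}}} |
{{!}}-
{{!}} Ruling
{{!}} [{{{ruling_url}}} full text]
}}
{{#if: {{{transcript_url|}}} |
{{!}}-
{{!}} Hearing transcript
{{!}} [{{{transcript_url}}} transcript]
}}
{{#if: {{{wikipedia|}}} |
{{!}}-
{{!}} Wikipedia
{{!}} [[wikipedia:{{{wikipedia}}}|{{{wikipedia}}}]]
}}
|}
```

A page would then call `{{Infobox court ruling|case_name=...|ruling_url=...}}` and any omitted parameter (e.g. no hearing transcript for that country) simply produces no row, so one template could cover all countries.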
- I remembered the name and found a page with an alternative link and a youtube video of the presentation: http://wetnet.net/content/software-unpatentable
- Links to pages requiring non-free JS also sometimes get added because I add links without clicking on the link, or I add links while using someone else's computer, so I don't notice the problem. When that happens, it's a bug.
- So, my proposal is:
- What do you think? Do you know any other sites, like youtube, that don't work in a browser (without non-free JS) but can still be used with specialised free software? Ciaran (talk) 19:58, 23 February 2014 (EST)
Why I consider Cloud-dependent packages as non-free software?

Debian GNU/Linux 6.0 "Squeeze" ships packages that integrate with web services (called in modern terms 'Cloud Computing' or SaaS, 'Software-as-a-Service' if you will), such as the Facebook API and Twitter plug-ins. What if Facebook decides to close down its APIs tomorrow? Will Debian drop those packages from the 6.0-stable release?

I'm not saying such packages must not exist. They should.

But (!) those packages interface with non-free web services, which is politically no different from non-free software. Technically even worse, because web software is likely to break at any moment, change APIs, or close down free access to it, and demand either NDA contracts(...)
(See the source here for more.)
I just looked at github for an appropriate tool, and the only one I found was effectively proprietary, because it had no license. Ssdclickofdeath (talk) 23:14, 23 February 2014 (EST)
- Maybe the author just didn't think about adding a licence. If you ask, they might add a free licence. Ciaran (talk) 08:57, 15 May 2015 (EDT)
- I have asked on some other projects' pages and never received a reply, but on another repo the author did add a license when I asked, and accepted a contribution too. It seems as if some people push some code to GitHub and move on with their lives.
- Anyway, it doesn't really matter to me right now, I haven't needed to read a file on Google Docs lately. Ssdclickofdeath (talk) 22:41, 15 May 2015 (EDT)
I'm on the spam now
You're welcome. I wonder if there are any captchas that are harder for robots to solve. Unless the spammers are human, that is. Ssdclickofdeath (talk) 14:08, 14 May 2015 (EDT)
- First some background, then two solutions I'm considering:
- Background: We were targeted by a spambot (all our spam had the same style), so I installed QuestyCaptcha, with captcha questions such as "what colour is grass?". The spam level dropped to almost zero, but there were still one or two pieces of spam per week with that same style. This must have been humans because, firstly, I don't think any 2012 spambot system could answer a question about grass, and secondly, if a spambot could answer that question then all of its spam should get through rather than just one or two pieces a week.
- But with such simple questions, why would humans only beat the captcha once or twice a week? My guess is that this work is shared among many people in some country where almost no one (at that wage level) speaks English.
- Then, recently, the spam levels shot up. I think this was due to a minor advance in spambot technology: They're probably saving the questions and answers somewhere, so once one of their people beats the question, then 100% of the spam flood gets past the captcha.
- The current (temporary) solution is that I've made account creation impossible. All our spam comes from accounts rather than IP addresses, so we're getting zero spam. Preventing account creation was actually an accidental side-effect of playing with MediaWiki:Titleblacklist. I don't want account creation to be blocked, but I was very busy and this stopped the spam, so it has stayed that way until now.
- Solution 1: Change the captcha questions frequently. Currently I have to ask the sysadmins to modify LocalSettings.php each time I want to change the questions, but I don't like bothering them. To fix this, I could ask them to install QuestyPage (which also requires either Lockdown or NamespaceReadRestrict; I don't know which is better).
- Solution 2: Replace the current login page with something that the spambots don't know how to use. For example, we could set up a single sign-on system so that people can make an account which works on the wiki and on the main wordpress site. This would only address the spam from accounts, but since that's currently where 100% of our spam comes from, this will be enough (for now at least).
- Or maybe we'll do both. If you've any suggestions, I'm all ears :) Ciaran (talk) 17:52, 14 May 2015 (EDT)
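- For anyone curious what the LocalSettings.php change in Solution 1 involves: QuestyCaptcha ships as part of the ConfirmEdit extension, and the questions are just an array in the config. A rough sketch (exact file paths and setting names depend on the ConfirmEdit version installed):

```php
# LocalSettings.php - QuestyCaptcha comes with the ConfirmEdit extension
require_once "$IP/extensions/ConfirmEdit/ConfirmEdit.php";
require_once "$IP/extensions/ConfirmEdit/QuestyCaptcha.php";
$wgCaptchaClass = 'QuestyCaptcha';

# Each entry pairs a question with its accepted answer;
# rotating the questions means editing this list.
$wgCaptchaQuestions[] = array(
    'question' => 'What colour is grass?',
    'answer'   => 'green',
);
$wgCaptchaQuestions[] = array(
    'question' => 'Type the last word of this question',
    'answer'   => 'question',
);
```

Changing the questions therefore means editing and redeploying this file, which is why Solution 1 needs something like QuestyPage to move the list onto a wiki page the admins can edit directly.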
- Thanks. It would probably work very well. The spambot networks put a lot of effort into automatically beating the widely used captcha systems, but they might never have looked at Simplecaptcha. But, to keep upgrades simple, my preference is for solutions that require as few new packages as possible or that are widely used in the MediaWiki community. I'll keep Simplecaptcha as a backup idea. Ciaran (talk) 06:17, 19 May 2015 (EDT)