FSFE Fellowship Blogs weblog
August 10th, 2014
There was a 3-in-1 event in Moscow a week ago: a PGP keysigning party, a cryptoparty and an installfest, called Crypto InstallFest, organized by Russia's Pirate Party. Several dozen people came. There were various talks and lectures, a video link and chat with Runa Sandvik (who is involved in TorProject.org), and finally workshops. Some people got help installing Ubuntu, some with PGP (GnuPG, of course). There were also many discussions about cryptocurrencies and Bitcoin (which I cannot call a cryptocurrency). There are some discussions and photographs on social networks: Vkontakte, Facebook.
July 30th, 2014
Several days ago I decided to make an alternative to OpenVPN: GoVPN. OpenVPN uses the rather slow HMAC for message authentication and has no zero-knowledge password-authenticated key exchange. It is pretty simple, but its security margin and performance are not that high.
I have already written a working daemon (though possibly with many bugs) in the Go programming language. It uses some of the fastest crypto algorithms available today and achieves a zero-knowledge, mutual, pre-shared-key-authenticated key exchange. All derived keys are per-session, so even if the PSK is compromised, there is no way to decrypt captured traffic (the perfect forward secrecy property).
It does neither interface nor IP-address and routing management: that is the task of the underlying OS facilities. Currently it can work with only a single client, but I am planning to fix that so it can be used with many clients simultaneously. Moreover, Secure Remote Password could be a better choice, allowing humans to use memorable passwords instead of 256-bit keys.
I think the main comparative advantage is the small code size, which can be easily analyzed, audited and fixed. From a technical point of view it uses Salsa20, Poly1305, Curve25519 and DH-EKE with a PSK.
May 11th, 2014
The Go programming language seems very interesting to me. I programmed for several years in Perl and Lua, and lately in Python. Go is like C, but with the convenient features I wished for. I had never coded in it before (except two half-screen-sized functions) and this weekend was the first time I decided to write something useful.
As the XMPP organization killed the global Jabber network (by requiring TLSed inter-server connections), I looked for a good global chatting solution. (Un?)fortunately only the IRC protocol seems simple enough, and there are clients for every available platform, I presume. The big global IRC networks are not protocol-compatible between themselves. Existing IRC daemons are not so easy and quick to set up. Anyway, even in the XMPP world there are islands of separated servers, so if my server cannot communicate with others, so be it.
I used the miniircd IRC daemon: it is written in Python, has no configuration files and satisfies all my IRC needs. But as I decided to try Go, I rewrote it: a single executable binary and a pretty fast daemon with the ability to save logs and channel states. https://github.com/stargrave/goircd
I still have not covered all its commands with unit tests (I began with the TDD principle, but my wish to chat with my friends was stronger), so there are probably bugs. And as I do not have any experience with the language, bugs are bound to exist. Currently it works pretty well for private/personal use. You can easily create TLS connections with the crywrap utility. And of course it is free software!
November 6th, 2013
There was a rather big cryptoparty in Moscow several days ago. In my opinion it went rather well: people left satisfied both with software they were already using (but had not been convinced by before) and with newly installed, proven-enough privacy software: dm-crypt/TrueCrypt, GnuPG and Pidgin's OTR plugin. Not only privacy-related issues were discussed, but cryptoanarchism-related ones too. The theoretical lecture and the software installation workshop lasted nearly four hours, and after that there was a rave music afterparty. Many kudos to the organizers (not perfect, but good enough for a first time with so big an audience) and to the people interested in their own privacy and anonymity. I hope most of them enjoyed it and will visit future ones.
Here is my more detailed opinion and my answers to some of the critique (originally written in Russian).
I would like to share my subjective opinion on how the recent cryptoparty in Moscow went (though I am of course not a neutral party, I was involved in the preparation to a lesser degree than others). Third-party blogs carry mostly negative reviews like "could not be worse", "complete fail" and so on. I rather completely disagree with that. Perhaps people were simply exaggerating, or otherwise wanted too much, including the impossible.
So what were the claimed failures:
- opening speeches with "we will find and punish criminals ourselves". Yes, this is certainly censorship, a display of something authoritarian, exactly what cypherpunks fight against. But even I (and I know how to nitpick) decided to let it slide: clearly they wanted to convey that cypherpunks do not do what they do in order to shield criminals; they find villains repulsive too, but any technology will always carry both good and evil. Malefactors (those who want to deprive us of privacy) will always show only the bad side.
- the opening speech with "facebook and email are no longer secure" of course raises the question "and when were they, and who ever thought they were?", but again, I personally decided not to nitpick. I can well believe that most non-techies seriously thought otherwise and that Snowden probably shocked them, though he merely confirmed the fact: no assumptions needed, you are being watched and listened to.
- far more serious is that proprietary closed software was demonstrated (Microsoft Windows, Adobe Reader), that services with absolutely no respect for privacy (Google+) were used for the video broadcast, and that the organization happened on Facebook. This is indisputably wrong, but Mikhail noted to the attendees more than once that only open and free software should be used whenever possible. Not everyone can jump straight from MacOSX and iOS to FreeBSD or GNU/Linux. I personally apologized for the demonstration with Adobe, but it violated nobody's privacy, and to avoid delaying people even more while I dug out my proper free computer, we decided to save time. Though I could not allow myself to create the presentation outside LaTeX/beamer. It was all done in a hurry and for the first time, so to be on the safe side we decided to deviate a bit from proper cypherpunk software and services this once.
- the unrehearsed introduction. Well yes, that happened. It was even amusing. The first pancake comes out lumpy; lesson learned. It was only a five-minute attempt to wedge in something that would attract more people than just the techies who came to discuss BitCoins.
Firstly, this is not Germany, where people are genuinely, seriously concerned about their privacy. There, they say, people are actually punished for illegal BitTorrent use. Here it is hard even to imagine that, and people here mostly do not even know the difference between free and freely-distributable software. That is, people here do not know the price of privacy (since they have not yet lost it on such a scale) and are less educated about it. Staging a massive hall-wide discussion of the pros and cons of the Cryptocat service is simply premature here.
Secondly, the organizers very rightly stressed more than once that the target audience is, roughly speaking, housewives with Internet access, who need to be shown the emerging dangers and how to fight them in simple terms. During the workshop Mikhail, as it seemed to me, gave the right recommendations, showed diverse software (why OTR is better than the others and what its shortcomings are, alternatives to proprietary clouds), demonstrated SMP working live in practice, and urged people to cooperate with each other. People approached me asking about the strength, relevance and acceptability of TrueCrypt, PGP and GNUnet software. Arranging a mass clattering of keyboards was technically impossible, since only a minority came with their own computers, and their goal in attending the cryptoparty was clearly not to outfit their tools and communication channels with cypherpunk software.
Thirdly, the organizers decided to arrange something bigger than herding people into a room with a projector and a microphone and ravaging their hardware so that the NSA/FSB could not reach it. Besides the purely cypherpunk topics, cryptoanarchy was touched upon, which is generally not mentioned in other countries. An atmosphere was created, with musical accompaniment and an attempt at a performance. They held an afterparty rave, though I cannot judge it, being a fan of far heavier and more extreme music.
Without the organizers there would have been nothing; hardly anyone else would have lifted a finger. The first pancake came out lumpy, but everyone took the mistakes into account. Like one's first sexual experience, it may not turn out quite as expected. But the main thing: people walked out with installed software, people walked out understanding the need for socialist millionaire protocols, people walked out caring about the usability of TrueCrypt and PGP implementations. So it did not pass without a trace or in vain. It can (and should) be done better, but for a first time it was more than excellent.
June 5th, 2013
I was a FreeBSD user for six years and worked with its versions from 5.0 to 7.0. Then too much of my work turned out to involve GNU/Linux-specific subsystems exclusively, and it was easier for me to switch to yet another UNIX-like operating system temporarily.
I tried several distributions but settled exactly on Debian. My requirements were:
- a mature, stable and reliable system without any bleeding-edge software. I do not worry that there is no latest version of Firefox, for example. The one included in Debian's stable distribution fully satisfies me. Maybe it is not as fast as it could be, but it is mature and working.
- a more or less permanent overall distribution architecture without any sudden surprises after yet another package upgrade. Of course sometimes this cannot be avoided, but serious changes must always come with a major software/distribution version, which is a rather seldom event.
- a big collection and wide availability of various software. Debian has one of the biggest package collections, and compiled binaries of all of them can be easily installed with a single command. Of course you must trust its maintainers; I trust and rely on them.
- its basic installation should not include anything that I am going to remove as a first step: just a minimal bunch of tools and daemons. Ubuntu, for example, does not provide that: I have to remove huge piles of GNOME-related things and only then install my preferred ones.
Even now Debian is the single distribution that fits those requirements. But several weeks ago I was very disappointed to hear that most of its developers support integration with systemd.
You see, modern GNU/Linuxes are not UNIX-like OSes with UNIX-way hackerish concepts anymore. UNIXes, in my opinion, were always the very beautiful creations of smart programmers, with really elegant solutions to their tasks. Most GNU/Linuxes have lost that property.
For several decades there were quite few interprocess communication choices. Most of the time it is either plain text or, unfortunately, binary data flowing between pipelines, pipes, domain or network sockets. Each daemon representing a subsystem can be more or less uniquely identified by a socket path or by a pair of network address and port. In nearly all cases this can satisfy anybody.
Even in the very early days of UNIX systems, hackers preferred plain-text-driven protocols and file formats. Though the relatively big SMTP responses are not as compact as binary ones could be, especially on the slow links of that time, hackers preferred human-readable choices anyway, because they are simple, easy to debug, easy to maintain and easy to use.
But GNU/Linux does not like the idea of beautiful, clever decisions and long-proven software. Its developers (I cannot call them hackers anymore in most cases) have to reinvent the wheel and create yet another incompatible solution, like several IPCs before and now DBus itself. It requires heavy dependencies, does not use well-known socket-like paths and addresses, uses an unreadable binary protocol, is slow, and neither guarantees delivery nor has a buffering queue.
Access to various low-level hardware devices used simple, filesystem-like device nodes. Of course many of them call for standards, and audio has one: Open Sound System, represented by entries inside /dev. An easy-to-use, easy-to-implement, proven and mature system. If you want to stream audio data over the network you can easily use UNIX's power to connect it with, for example, either a pipe or a network socket.
The GNU/Linux folks do not understand that elegant solution and invented ALSA, aRts, ESD, NAS and, at last, PulseAudio. So many reinvented creations for a rather simple thing. Of course OSS is not the right solution if you have to mix various sound inputs and outputs of both hardware and software modules, but JACK does that job pretty well. GNU/Linux developers, again, do not think so.
What about the operating system's initialization part? You have various daemons that should be started and controlled. You have to perform various filesystem-related steps and manage process execution somehow. All those tasks have long been done using the shell interpreter, which is intended to solve them. In fact each daemon has a small shell script used to control the server's behaviour. Hackers need to glue those daemons together. To me, it seems a very elegant solution to include trivial plain-text metainformation as script comments and to create symbolic links based on that metainfo, with a number included to force the right ordering, as in System V.
The UNIX way is to have many small tools, each of which does a single job, but does it well. A simple separate initialization system, a simple separate logging system, simple separate shell interpreters, simple IPC socket-oriented libraries, simple daemons, cron, inetd and so on. Looks simple, clear and nice.
You are wrong! Modern GNU/Linuxes cannot accept that, because they are missing a program written in a compiled language (so it does not depend on already existing software for controlling process flows, i.e. shells), with its own IPC dependency and its own declarative language: a bloated combine of initialization, logging, cron/at-ing, inetd-ing and DBus/socket-listening systems all at once. Wait, systemd is pretty modular: several dozen separate executables. Hackerish SysV is just a shell interpreter with several shell scripts. Thirty years ago logs were written to rather small hard drives in plain text, but today it seems hard drives have become much smaller and more expensive, and systemd decided to write binary logs, human-unreadable and unprocessable with any kind of sed/awk/shell/perl tools.
I still do not understand why GNOME and derivative distributions (I am sure udev, systemd, dbus and GNOME form a single aggregate) do not use very simple mailcap files to decide what to do with various kinds of data. mailcap contains plain-text lines with a data content type and shell code saying what program to run and apply to the data. Just find the line by its content type and execute the related command line; this can be done with a single sed call. Just a simple plain-text file to rule all the user's software preferences. GNOME instead has to pre-run software that registers itself on DBus (which must already be running), then other software must create a proper message and send it over DBus, hoping that someone will catch it and do what the user probably wants. It is awful.
And at last I see in the Debian mailing lists that they are going to remove the local sendmail server. I see what is happening: when systems are created by very clever hackers, they are very cool for educated technicians and other hackers. When the ordinary labouring crowd falls into this world, it gets ruined. Usenet was destroyed like that. Email etiquette has mostly disappeared, replaced by top-posted, hugely quoted HTML messages, after user-friendly email clients were born.
Security is not compatible with user-friendliness. Simple clever hacks are not compatible with the classical user's world view. Developers never speak the same language as users. There is always a separation between developer-friendly and user-friendly. They cannot coexist, just as servers are pretty different from desktops.
Current Debian is a very developer- and server-friendly system, while Ubuntu aims to be user-friendly. Systemd is great for desktop requirements, so let's integrate it into a desktop system. But why replace cron/at, SysV/rc, inetd, sockets, syslog and devnodes with a single all-in-one bloated monolithic combine, and remove sendmail? What will remain of UNIX itself? Arch Linux is going to merge /bin and /sbin into /usr/bin, so I won't even find /bin/sh in that OS. It is not a UNIX-like system anymore. It is yet another unmaintainable pile of compiled monolithic POSIX-compatible (I hope) code.
Of course there are really true hackerish UNIX-like GNU/Linux distributions, but all the known ones require much manual work. The free software *BSDs do not, as they have cool ports collections and a well-maintained, high-quality overall system design (not a pile of absolutely different software pieces).
March 31st, 2013
At last I realized that the zsh shell is really much more useful and better than tcsh, and far more so than bash. Many shells provide different very cool features that look like killer features, but in most cases, for me, all of them see seldom use. tcsh has plenty of history substitution options, just as bash has a large quantity of parameter expansion techniques. But I hardly use even a small piece of them. Of course they can greatly reduce the overall character input count, but they are too bloated to remember.
One of zsh's most often mentioned features is its command completion. It is a convenient possibility to use fancy menus to select either the process you are going to kill or the git subcommand to execute, or maybe just to choose the corresponding directory or filename. Well, sometimes, in my opinion, this can be pretty useful, but in most cases the visual analysis of all those entries while exploring the menus leads to too high an interaction (human-with-computer) delay, and I will type two more characters of the filename and complete it with Tab faster. Entering part of a filename and hitting Tab is one context; searching for the necessary entry is an incomparably different one. Context switching is an expensive operation.
Moreover, all those completions can be very relaxing: you will forget your file hierarchy, forget what options a command has, forget what targets exist in your Makefile, forget how to easily automate saving a PID and killing by it, forget how to make cool shell aliases or write yet another extra-useful small Perl script. Of course there is no distinct border for that unskilled relaxation: if there are files "foo" and "bar", then obviously there is no need to force a hacker to type the full name. Transparent browsing and completion of remote SSH directories is another undoubtedly useful feature. But all of those completions exist in other shells besides zsh.
Anyway, there are some killer features that made me a hard zsh fan, and currently there is no word about switching back to either tcsh or bash. Here they are:
- multiline editing capabilities are extremely useful, sparing me many temporary one-time separate shell scripts. And of course you can easily edit them inside an external text editor.
- **/*-like recursive path expansion, which saved me from a huge quantity of find invocations; together with glob qualifiers like *(.) you will forget about find in most cases altogether. I had several aliases and external shell scripts, all calling find, but now I have thrown them out.
- command spellchecking — tcsh already had this feature, but bash did not. With high-speed typing the error rate is pretty high too, and this feature can save much time and nerves.
- the autopushd possibility — each cd to a directory acts like pushd, and you can easily travel back as in your browser. This feature is particularly useful together with the Z plugin.
- the autocd option, also present in bash. Its usefulness is under a big question, as ambiguity may appear too often with it. With this option you can omit cd before a directory name and the shell automatically understands that you are going to change into it.
- filename-extension-related aliases that save a lot of time: no more typing zathura to view PDF files, sxiv for images and so on.
- it is much faster than bash. Even without turning off unused extensions it starts faster, runs faster, completes faster. Each dozen milliseconds is nice to spend not waiting, even on a many-bogomips powerful computer.
And there is a separate killer, must-have plugin that I first met when using bash (it also works with zsh, of course): the Z directory jumper. It tracks each directory change and offers quick jumping to previously visited places, matched by a regular expression and ranked by visit frequency. And it works perfectly with autocd and autopushd.
The zmv feature looks very promising, and it seems it will replace another bunch of Perl/shell scripts on my computer. However, it has some learning curve, of course.
March 31st, 2013
I still do not understand why people like Gmail's webmail so much and are ready to replace powerful desktop email clients with it. There are plenty of reasons not to.
No threaded conversation view
There is no threaded conversation view. What the hell!? Even simple mailx under every modern GNU/Linux distribution can display messages in a threaded view. Email's very nature — its In-Reply-To, References and similar fields — assumes that any conversation can be graph-like, not linear. Of course you can create a really linear thread, where every next message is In-Reply-To the very first message that started the thread. But Gmail does not do that, In-Replying-To the message you are actually answering at the moment.
I think the root of that "problem" lies only in the huge quantity of cyberspace newcomers who are not capable of a serious long-term discussion, with many new questions arising during it that have to be separated from the main thread. Those people have mostly never met a situation where a technical discussion leads to completely different and more important subjects. Most of them tend to mix unrelated discussion subjects in a single letter.
It is just a simple lack of discussion experience. Of course, neither instant messaging nor IRC chatrooms have anything similar; they offer just one raw, huge heap of short messages. That is chatting, nothing more, and such behaviour is acceptable there. But email was born as a powerful tool for convenient conversation without big piles of weakly referenced messages with a low overall signal-to-noise ratio. For several dozen years this email nature has never been seriously changed, because it is really clever and fine. So many hackers cannot be wrong.
Google turned it into yet another instant messaging system. It does not use specific protocols like XMPP for that, but SMTP. It destroys all the graph-like structures by linearizing them. Newcomers just cannot learn how to deal with real serious discussions (like hackers do), because they do not have the instruments for that.
Ignoring In-Reply-To header fields
Gmail does not look at messages' In-Reply-To fields to determine whether a message starts a new thread. If you change a message's subject (just for clarification, for example), it will be recognized as a new thread, even though you clearly state in the In-Reply-To field that this is a reply to exactly that message of the same thread, not another one. A long-term discussion thread with several branches is divided into unrelated, isolated linear conversations, even though the headers did not allow this behaviour.
Lack of header control
Moreover, you cannot control your own message headers. You cannot forcefully split a thread (removing In-Reply-To while keeping the subject the same). You cannot join a mistakenly dismembered thread. You cannot use mailing list management software that is controlled through message header fields. You cannot add even simple informational fields, for example just to mention your PGP public key's availability.
Lack of any mailing-list-related functionality
As mentioned above, there are no "thread split" and "thread join" actions. Also there is no "reply to list" button, just the trivial "reply" and "reply to all". This is why people so often send messages twice to one recipient (duplicating them through the already included mailing list and by CCing him simultaneously). Gmail has no way to show whether you are subscribed to a mailing list (so there is no need for a separate reply addressed to you) or not subscribed and should be CCed. It does not support the Mail-Followup-To header. You cannot add it, because you are not allowed to edit headers. And you cannot act on it except manually, because, again, there is no "list reply" functionality.
There is no such powerful thing as message scoring (though many desktop email clients lack this feature too). You can specify various rules that alter a message's score and sort all correspondence by it, leaving trivial notifications at the bottom and moving them to the trash after reading. You could easily wipe unneeded messages and easily find important things. Gmail's mailboxes are just huge heaps of equally scored letters.
Supporting obsolete RFC standards
Gmail encodes attachment filenames per the obsolete RFC 2047, instead of the modern and everywhere-supported RFC 2231. Did I say everywhere? Of course with exceptions: Microsoft Outlook products will never be on the bleeding edge of technology. Gmail seems to be on the same path.
Gmail does not allow you to send Windows executable binaries of any kind, in any form, even when using it without the webmail interface, even if you change the attachment's Content-Type or filename extension, put it in an archive, or compress that archive. They say this is because of security. Why do they make decisions about what will be secure for me? Who the hell are they? It is my choice, my problem and my responsibility to send my own compiled binaries to someone awaiting them.
- Inability to forward a message as a valid, correct RFC 822 email attachment, keeping all headers and attachments as is.
- You do not know how the spam filtering works, and you cannot adjust or control that black box.
- No regular expression support or complex search queries (compared, for example, with Mutt). But I agree these are seldom-used actions for most of us.
- No way to either specify or override an attachment's MIME type.
- You cannot send just a single file/attachment: the message will always consist of an empty text part and the attachment itself.
- No possibility to turn off line wrapping, for example either to insert patch/preformatted text contents as is or to draw ASCII schemes.
And of course the obvious lack of the PGP de facto standard for privacy greatly reduces all the potential usage possibilities of that webmail.
November 15th, 2009
There are many "theoretical" talks about how free software can be used commercially, how it can greatly stimulate business activity and so on. There are very few real-life examples of that. And most of them, as far as I can see, first had the common classical proprietary model of software development, and only later did some of them either free their products or at least open them. As I understand it, only after the fear of competition was gone did they make timid steps toward open source (as nearly none of them really understand the difference between open source and free software, as most users do not either), just to look good and kind in society's eyes.
Now I want to tell you a kind of so-called success story of one company (where I work nowadays): a company that chose the path of freedom as a base for software development. Actually it does not specialize in software, but in manufacturing high-performance server solutions and storage systems.
The first, troublesome step
In all innocence, the first time we needed to develop our own software, it came out as a common classical proprietary, non-free, closed-source product. It was a kind of firmware for a firewall/router based on a free software project — m0n0wall, licensed under the 2-clause BSD licence. This licence, being non-copyleft, allows one to make proprietary derivative works, which was crucial for us.
Many features were added to it (a good thing there were neither serious security issues nor bugs), but because of our fear of "shining" with it, we decided not to communicate with the upstream developers at all. There were also licence ambiguities with the remaining bundled software.
And what was the result? Of course we gained some money from selling it, but not because users were willing to buy exactly it; rather, there was no acceptable choice for them: a cheap server meant to be a firewall with plenty of useful abilities was sold only bundled with our proprietary software.
To guarantee high-quality server manufacturing, we have to test all hardware components separately and all of them together in the whole system. Besides, there must be automation of firmware upgrades (motherboard BIOSes, BMCs, hardware RAID controller firmware, etc.) and of operating system installation. All of this is needed to remove the human factor as much as we can and to complete orders on time.
So we needed a very complicated, ever-progressing hardware testing system. Thus appeared the Inquisitor software project, actually with roots going much deeper in time. The decision about its freeness was taken without a peep.
What benefits did we get? Let's look:
- There was no need to work around the copylefted software used in it, to think much about "defending" its source code from others' eyes and so on — only about licence compatibility, but that is another question.
- We actively collaborated with different foreign free software projects related to our system. The whole community benefits. Willingly or not, we were software testers too, as much of the software provided the needed features only in non-stable versions.
- Our subproject — Einarc — was helped greatly by people totally independent from us. You know, no one can have a great quantity of different RAID controllers and enough time to "play" with them.
- We avoided possible unethical situations where someone would steal the source code to use in their own creations. Copyleft protects our freedom and saves us from possibly losing court cases.
- Money? We would not lose anything even if the project were closed. In most cases one will hire our team to configure and install this complex system to fit the employer's requirements. As someone might say, raw source code is useless without the corresponding team.
What disadvantages does Inquisitor have from being free software? None!
But as we all know, the world economic crisis came. Here in Russia it should have been destructive for high-technology fields, as they generally cost too much to afford. And it actually was, of course.
At that time we thought about how we could lower our expenses. As software developers, we decided to throw out the proprietary, very expensive network attached storage (NAS) software and replace it with something cheaper or costless (freeware).
Moreover, those proprietary NAS products had other disadvantages:
- Our hands were tied: we could not modify that software to better fit our servers, or at least to raise its performance.
- Users can only use the features that were already built in; there is no way to improve them, remove them or add new ones.
We cannot sell a NAS storage server without such software, and no user would buy it that way, but both of us had to pay a very high price for it. We tried to find a replacement: there were plenty of very different free and open-source solutions, but none of them satisfied us (for purely technical reasons). So we decided to write our own and, of course, release it as free software.
We got what we wanted and even more:
- We do not pay for each copy of the NAS software (nor per terabyte, per connection, per user and so on, as proprietary vendors charge).
- We achieved support for a wider range of hardware RAID controllers using the already-mentioned Einarc utility.
- We can lower expenses even further by replacing those incompatible proprietary hardware RAID controllers with well-known, proven and mature software RAID solutions.
- We greatly helped the Einarc project (which we also drive ourselves) and, as a result, the Inquisitor platform too.
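As an illustration of the software RAID replacement mentioned above, on Linux an array can be built with the mdadm utility in a few commands (a rough sketch; the device names and RAID level here are hypothetical, not taken from our actual setup):

```
# Create a RAID-5 array from three disks, replacing a
# hardware RAID controller with the Linux md driver:
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/sda1 /dev/sdb1 /dev/sdc1
# Watch the array being assembled and resynced:
cat /proc/mdstat
# Record the array so it is assembled automatically on boot:
mdadm --detail --scan >> /etc/mdadm.conf
```

The resulting /dev/md0 device is then used like any ordinary block device, independent of any controller vendor.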
Users also got several benefits from all of this:
- Lower cost of storage servers.
- Independence from RAID-controller vendors thanks to software-based arrays.
- We can modify the NAS software according to users' wishes.
- If other companies start using this shared software, competition in this field will grow, leading to cheaper and higher-quality solutions.
We are satisfied, consumers are satisfied; can it be true that everything is fine? Of course not: the proprietary software companies, which do not care about users' freedom, conceal everything they can, force everyone to do what they order without deviation, and want only to pull everything out of other people's pockets, are not satisfied at all. It is unethical and immoral to be like these companies, and one cannot consider them an alternative.
As I have tried to show, free software really can be used successfully in commerce. Of course it is not easy, but the whole of humanity benefits from it, except for the money-hungry. And even during an economic crisis it can help a company survive on the market.
Sergey Matveev (software developer at ETegro Technologies)
September 5th, 2009
Nearly a week ago I discovered that none of my BitTorrent downloads from PirateBay worked. Everything seemed fine: the PirateBay website worked perfectly and torrent files could be downloaded without any problems. But pings to PirateBay's tracker did not work at all. DNS gave correct results, but the packets were dropped. Using traceroute I understood that my ISP was dropping them: the packets did not even make it to M-IX (the biggest Moscow Internet exchange).
From co-workers I learned that the European “big” and “important” men were going to punish every European ISP that provided access to PirateBay's tracker. I checked half a dozen other Moscow ISPs, and they were dropping everything going to PirateBay's tracker too.
I thought: “What the hell are they doing?” I felt like a poor sheep among wolves. I pay them (not a low price) for real Internet access, not for whatever pack of services they like and decide to make available.
The guys from PirateBay are clever: one of them opened a simple, pure BitTorrent tracker (OpenBitTorrent), and it was added to all torrents as an alternative. I switched to it in my BitTorrent client and everything began to work fine again.
But that made me think about what will happen if someone “important” (of course these “important” and “big” men are nothing more than simple money-lovers) finds “enough” arguments to close even the legally clean (IMHO) OpenBitTorrent. Of course yet another tracker will appear, and another, and so on, but the situation is completely abnormal: rich men dictating to us what we may use, download, watch and so on.
Is there anything that can protect us, protect our privacy and give us freedom, at least on the Internet? I know about Tor onion routing: I run a router all day long, giving away all available bandwidth. But it cannot help protect torrent-index sites (such as PirateBay) and cannot protect Tor's exit nodes. There are powerful groups of lawyers ready to defend exit-node operators' rights, but I am not sure they can achieve anything in countries like Russia with their rotten legal systems. And even if all of that led to successful verdicts, a single court case would take a really long time. Time is expensive. Besides, many people in the legal profession would be busy with lame, foolish, money-dependent matters: an unneeded layer of society, a waste of time and money.
I thought a possible solution could be this: run the BitTorrent tracker and the torrent-indexing website as a Tor hidden service and force all clients to use SSL. That would fully hide the BitTorrent server side and make it impossible to understand what each client is doing.
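As a rough illustration, publishing such a tracker as a hidden service takes only a couple of lines in Tor's torrc configuration file (the directory path and the tracker's local port here are made up for the example):

```
# Tor generates the onion address and keys inside this directory:
HiddenServiceDir /var/lib/tor/tracker/
# Forward port 80 of the onion address to the locally running tracker:
HiddenServicePort 80 127.0.0.1:6969
```

Clients then reach the tracker at the generated .onion hostname through their own Tor client, so neither side learns the other's real address.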
But… is there a more beautiful solution? One that can also prevent the single known realistic attack on the Tor network: traffic and network analysis. If we have a hundred computers with no traffic among them at all, and several minutes later a new Hollywood blockbuster torrent appears on a hidden torrent-index website, then heavy network analysis can point to the probable location of the server, and maybe of the leechers too. That would give the police only a prediction of whose computer to search. I think it is not enough to set the police in motion, but it is possible, because they all love money and will do any dirty job for it.
After some searching I discovered several network systems such as Freenet, Mixminion and GNUnet. From the technical, privacy and anonymity points of view, GNUnet is the best choice among them. It protects content uploaders (anonymity), content retrievers, searches, search results and even network activity (a constant encrypted traffic load), and it offers strong protection against spying (the insertion of “bad”, “rich men's” nodes into the network).
Do not misunderstand me: this is not an advertisement for GNUnet, but rather my wish to share my excitement and feelings about it.
Building the latest source code on my MIPS-based notebook finished without any problems. Configuration for a single daemon is very simple. GNUnet comes with classic, true UNIX-way command-line utilities: one for searching (simply enter search keywords), one for downloading (just enter an ECRS path) and one for publishing content, which is rather simple too. Of course this is not the full list, but these are the basic tools for fully anonymous, censorship-free, privacy-preserving sharing.
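A typical session with those three utilities looks roughly like this (a sketch with tool names as in the 0.8-era releases, where publishing is done by gnunet-insert; the keyword, file name and URI are made-up examples):

```
# Publish a file under a search keyword:
gnunet-insert -k lecture lecture-notes.pdf
# Search the network by keyword; matching ECRS URIs are printed:
gnunet-search lecture
# Download by the ECRS URI printed by the search:
gnunet-download -o lecture-notes.pdf "gnunet://ecrs/..."
```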
GNUnet can use not only the ordinary UDP and TCP transports, but also HTTP (with proxying support) and even SMTP.
I had read a lot of disappointed reports about GNUnet: searching takes too long, and so does downloading. I decided to share several gigabytes of content and have a friend search for it and download it. I expected much lower download rates and longer search times, but everything went surprisingly well: only half a minute or maybe a minute for searching, and about 10-20 KiB/sec download speed right from the start. Downloads can be “threaded” from several sources (unlike Tor with its single TCP connection), with swarming after that. So, theoretically, it can be as fast as BitTorrent.
I want to show people that ISPs are far too dependent on the “big” rich men who dictate all the rules of their behaviour. We are suffering from it, but we can prevent it. A ready-made solution for file sharing with full anonymity, privacy and no damned censorship already exists. GNUnet is more than a file-sharing system: it can be the base for many other services; SMTP, HTTP, IRC and VPN (AFAIK) can already be run on top of it. There is also the Tor system, but sometimes it is not enough.
We can stand against the rich men and we can save our freedom. All the tools needed for it exist and they work: not theoretically, but in practice. The main problem with GNUnet is only the too-small number of people using it, so let's share!
March 22nd, 2009
Studying at the Moscow Aviation Institute, I meet different lecturers, different sciences and, of course, different software. As most lecturers are quite old men, some of them do not know computers and computer software at all, and some know only what was introduced in Russia in the mid-90s, when PCs became accessible to ordinary people. Nearly all of those computers ran Microsoft operating systems such as Windows or DOS. Of course people knew nothing about free software or “alternative” OSes, and did not know even Windows very well. And of course nearly everything was stolen, that is, cost-free. The bigger part of the engineering and scientific programs in our institutes was created on those PCs with illegal copies of Windows.
Today a great many assignments and tasks for students require a computer and those old programs. Many of them are the lecturers' property, their own creations. All of this can cause many problems for someone trying to use and support only free software. Some lecturers hand out their programs with all the corresponding source code (as a rule, written in Fortran). Others give only binary/object code and refuse to release the source, because, as they say, it is very hard to understand and is very important and expensive intellectual property. Yet others use scripts for MatLab or MathCad and force students to use those products. Also, many of them demand Microsoft Word format for all reports, Excel for plotting graphs and AutoCAD for drawings.
It is very hard to study in such an environment. Most assignments I can complete using the free GNU Octave instead of MatLab, Maxima or SAGE instead of MathCad, Gnuplot for plotting graphs and QCad instead of AutoCAD, and I try to assure teachers that there is no need for proprietary, expensive, unreliable software, since I can successfully use the free alternatives instead. I can say that I have no money to purchase most of this software. They could give it to me, but nearly all of it requires Microsoft Windows to run. Problems can appear even when teachers hand out the task itself, again in a closed proprietary Word or MathCad format. They can refuse to talk with me because I refuse to use proprietary software on my computers: I cannot afford it, I do not trust it, and at best I am forced to run it in a virtual machine or on a separate computer, because I keep valuable documents and information on my PC. And I am not even talking about the ethical and social aspects of all this, only about the price, the safety of my information, compatibility with other software and legal use (I do not want to be an offender).
Instead of learning and diving deeper into science, I have to listen to how something is done in expensive non-free software and simultaneously learn the free alternative at home (actually I like that, but it takes a lot of time). Then I must prove that my results are correct and complete. And, by the way, I have to bring my notebook and show my work on it (if paper printouts are not enough), since teachers refuse to install free software they do not know and do not want to know. They cannot accept that a student may use really serious, heavyweight programs (OpenFOAM, for example, for aerohydrodynamic computations). In the worst case I end up in trouble with one teacher and have to show my work to another.
But happily, something good comes out of all these stories. People, seeing these refusals and hard-line positions, begin to think about the cause; they begin to ask why I am doing this and what the problem is. They discover the existence of free software and its benefits. People begin to think, to think about their future. In the lecturers' eyes they are no longer calculating mechanisms; they are thinking beings, people. And after several years of studying I see progress: recommendations and offers (by teachers) to use OpenOffice, Maxima and LaTeX, for example.
PS: Today I discovered Microsoft's DreamSpark project. Its aim is to give away various software (including the Windows OS itself) to students at no cost. It is terrible. They are openly pushing students (and future workers, future users of proprietary software) to use their products by any means. They also force students to register with their Windows Live service. On their site I saw many student competitions. As a student, I was very interested in them and in the competition reports. But… everything is available only in OpenXML-based Office formats. Following other links I found students' “success” stories, here and there (in Russian). I cannot believe such delirium; I cannot believe there really are people who believe it. Are Microsoft Office's student users really so silly? I was shocked.