planet.opensource.dk

Planet OpenSource DK - http://planet.opensource.dk/
Updated: 11 weeks 4 days ago

Peter Toft: Facebook must be phased out! In with ... Ello?

September 25, 2014 - 23:08
I am (probably like many of you) facing a dilemma. Facebook is gigantic and all my friends are there, but I hate that Facebook controls with a heavy hand what I get to see. It is not the latest posts or anything else logical; instead Facebook applies some shady filters that hide some things and ...

Peter Larsen: Two-faced reality with difo/dk-hostmaster

September 25, 2014 - 20:46

We are a nation with a lot of IT knowledge; we ought to know better and ought to have all of this under control. We do not. Public attention among the IT-interested is currently on CSC, but our top-level domain is, politically speaking, a disaster.

The sad part is that today's session at dk-hostmaster was such a flashback that you found yourself missing Per Kølle's half-open Hawaii shirt dancing in front of you while he monotonously read our registrar contract aloud, or scolded us and questioned whether we were trustworthy. Perhaps it would have been a better solution to hire Per Kølle to run this "briefing meeting"; at least then our expectations would already have matched the content.

In an almost completely tragicomic performance of Swan Lake we watched dk-hostmaster/DIFO dance towards us this spring, when they were after the renewal of their mandate to keep .dk. Back then nothing was impossible: an EPP renew command, pre-act online tomorrow, certainly no NemID, and sunshine every Saturday and Sunday. Now that they have had it approved, they have changed face in the style of Jekyll and Mr Hyde: a meeting with decisions made in advance and no willingness to listen to knowledgeable registrar colleagues, or to yours truly, who is desperately trying to hammer just a little knowledge about our business, and about what the customers (the registrants, YOU!) need, into their heads.

Truly a two-faced performance, disappointing, and a sad future for .dk.

NemID... NemID... mmmm, already tastes good, doesn't it? It is one of the things you will need when you, as a Dane, want a .dk domain in the future. Because you and your address must be CPR-verified, and ultimately NemID-verified. I said it was coming, I said it long in advance, you would not listen, and now here it comes.

And it must happen before the domain is reserved: no validation, no registration. "Because it is best for the registrants that they do not think they can get the domain if they cannot be validated."

Because it is better for the registrants, because it is a requirement in the law.
At the same time dk-hostmaster has done nothing to gather support from the registrars to fight "the law", perhaps because in reality dk-hostmaster WANTS this, and in reality they WANT CPR validation and NemID, because it is of course best for the registrants (and great for job security, and great for keeping any other registry providers out, and great for keeping us locked into a technically miserable solution).

And it is coming before March 2015. A completely wild plan for an organization that, by its own account, is still in the "planning phase": we are to have full-scale verification up and running, and if you cannot be verified, your registration fails, naturally. We were supposedly at the meeting to give input, and not just to hear that "At the meeting we will go through the main elements of both the chosen technical solutions and the legal changes." (for those who do not recognize the text, that quote is from the invitation to the meeting). That does sound like an invitation to a debate, right? Sure.

When I call it a "wild plan", it is because, at the same time as we have held two registrar meetings a year, we have had our ears stuffed with promises that pre-act was ready and would be online soon, for a good 3 years. It has now been online for two-ish months, but it is still unusable (does that count as online, when it is technically impossible to use?). And pre-act is, after all, just an exchange of information with a token, with no contact with the CPR registry and no contact with NemID, and now dk-hostmaster is going to build that part in 6 months?

That sounds damn wild.

In the meantime, the same organization has managed to spend about 5 years building an EPP server that currently has about 20% of the functionality, and to tangle it into a business model they can barely explain themselves, while EURid has developed a full EPP server, put it online, and administers millions of domains on it.

That sounds damn wild.

We have DNSSEC on .dk. Well, that is, we have our very own home-baked, miserable solution which is only used by technicians. Meanwhile we are being overtaken by Sweden, Norway, Germany, the Netherlands, and probably soon Greenland, in the number of zones with valid DNSSEC records.

That sounds damn wild.

Apparently we, as registrars, are now also to be audited, because dk-hostmaster does not find it sufficient that we have a contract with them; they also want to change our contract so that they can audit us and carry out punitive actions. What happens to the contract we have today? Why is it not enough simply to enforce it against the parties they believe need their contract enforced? Perhaps because they want to force us into silence as part of it?

That sounds damn wild.

For many years we have been screaming, on behalf of the registrants (since the Consumer Council apparently is only interested in collecting money for the board seats in DIFO and dkhm, and not in looking after the registrants' interests), that we considered the 3-month activation period followed by deletion to be illegal. At the last meeting we underlined it, and now, poof, it is to be removed. "Because it is better for the registrants." More likely because it is utterly illegal to sell something and then delete it? Openness? There is only one set of ordinary board meeting minutes online from this year?

That sounds damn wild!

Did I forget to mention that if you are a foreigner, i.e. do not live in Denmark, there are NO verification requirements at all?

That sounds damn wild!!

That is damn well enough; I am bloody well not keeping quiet any longer. The gloves are off.

Poul-Henning Kamp: Which idiot came up with that idea?

September 25, 2014 - 16:40
There is no relevant difference between Windows autorun.inf, SQL injections, and the utterly brain-dead, idiotic idea that environment variables should be executable in /bin/bash. All three are based on the same fundamental brain haemorrhage: executing data as code without being asked to. And ...

Peter Toft: BIOS upgrades and UEFI: Are we on a losing course with Linux?

September 14, 2014 - 23:15
I recently bought a Gigabyte Brix computer with an Intel Celeron N2807. The machine got a 4GB RAM stick and an SSD, and since it ships without an operating system, I had to get on with installing one. I threw Ubuntu 14.04 with XBMC on the machine to see how it behaves. As such ...

Leif Lodahl: Apache Open Office and LibreOffice should join forces

September 13, 2014 - 12:01
This proposal has been made many times over the last couple of years, and was lately repeated by Daniel Brunner, head of the IT department of Switzerland's Federal Supreme Court: https://joinup.ec.europa.eu/community/osor/news/open-and-libre-office-projects-should-reunite. And at first sight I can only agree. There is no reason what ...

Poul-Henning Kamp: The CSC case

September 12, 2014 - 17:37
I am getting a worse and worse taste in my mouth from that CSC case. Let it be said right away that I take no position at all on the question of guilt, and that my impression is that the accused, the national police, the prosecution and CSC are all incompetent clowns -- the whole lot of them. But ...

Jesper Dangaard Brouer: Mini-tutorial for netperf-wrapper setup on RHEL6/CentOS6

September 12, 2014 - 14:09
The tool "netperf-wrapper" (by +Toke Høiland-Jørgensen <toke(at)toke.dk>) is very useful for repeating network measurements that involve running multiple concurrent instances of testing tools (primarily netperf, iperf and ping, but also tools like d-itg and http-getter).


The tool is best known in the bufferbloat community for its Realtime Response Under Load (RRUL) test, but netperf-wrapper has other tests that I find useful.
Core software dependencies are recent versions of netperf, python, matplotlib and fping (d-itg and http_runner are optional).

Dependency issues on RHEL6
The first dependencies are solved easily by installing "python-matplotlib":
 $ sudo yum install -y python-matplotlib python-pip

The remaining software dependencies turned out to be a challenge on my RHEL6 box.

The "ping" program is too old to support option "-D" (prints timestamp before each-line).  Work-around is to install "fping", which I choose to do from "rpmforge":

Commands needed for install "fping":
 # rpm -Uvh http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm
 # yum install -y fping

The "netperf" tool itself (on RHEL6) were not compiled with configure option "--enable-demo=yes" which is needed to get timestamp and continuous result output during a test-run.

Thus, I needed to recompile "netperf" manually:
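(The exact build commands are not preserved in this excerpt; the following is a minimal sketch of how such a rebuild could look, run from an unpacked netperf source tree, with the install prefix left at its default.)

 $ ./configure --enable-demo=yes
 $ make
 $ sudo make install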


Install netperf-wrapper
Installation is quite simple, once the dependencies have been met:

  • git clone https://github.com/tohojo/netperf-wrapper.git
  • cd netperf-wrapper
  • sudo python2 setup.py install


GUI mode
There is a nice GUI mode for investigating and comparing results, started by:
 $ netperf-wrapper --gui

This depends on matplotlib with Qt4 (and PyQt4), which unfortunately was not available for RHEL6. Fortunately there is a software package for this on Fedora, named "python-matplotlib-qt4".

For GUI mode, netperf-wrapper needs matplotlib with Qt4:
 $ sudo yum install -y python-matplotlib-qt4 PyQt4

Thus, the workflow is to run the tests on my RHEL6 machines, and analyze the result files on my Fedora laptop.


Using the tool
The same tool "netperf-wrapper" is used both for running the test and for analyzing the result afterwards.

Listing the tests available:
 $ netperf-wrapper --list-tests

To list which plots are available for a given test, e.g. "rrul":
 $ netperf-wrapper --list-plots rrul

Before running a test towards a target system, remember to start the "netserver" daemon process on the target host (just run the command "netserver", nothing else).

Start a test run towards e.g. IP 192.168.8.2 with the "rrul" test:
 $ netperf-wrapper -H 192.168.8.2 -t my_title rrul

It is recommended to use the option "-t" to give your test a title, which makes it easier to distinguish when comparing two or more test files in e.g. the GUI tool.

The results of the test run will be stored in a compressed, JSON-formatted text file, with the naming convention: rrul-2014-MM-DDTHHMMSS.milisec.json.gz

To view the result, without the GUI, run:
 $ netperf-wrapper -i rrul_prio-2014-09-10T125650.993908.json.gz -f plot
Or, e.g., select a specific plot like "ping_cdf":
 $ netperf-wrapper -i rrul_prio-2014-09-10T125650.993908.json.gz -f plot -p ping_cdf

netperf-wrapper can also output numeric data suitable for plotting in org-mode or .csv (spreadsheet) format, but I didn't play with those options.
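A sketch of what that would look like (the formatter names "csv" and "org_table" are assumptions here; only the "-f" option itself is the same one used for plots above):
 $ netperf-wrapper -i rrul_prio-2014-09-10T125650.993908.json.gz -f csv > results.csv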



Updates
A release 0.7.0 of netperf-wrapper is pending.



Extra: On bufferbloat
Interested in more about bufferbloat?

Too few people are linking to the best talk explaining bufferbloat and how it is solved, by Van Jacobson (slides).  The video quality is unfortunately not very good.

I've used some of Van's points in my own talk about bufferbloat: Beyond the existence of Bufferbloat, have we found the cure? (slides)

Jesper Dangaard Brouer: Network setup for accurate nanosec measurements

September 11, 2014 - 14:37
As I described in my previous blog post, I'm leveraging PPS measurements to deduce the nanosec improvements I'm making to the code.

One problem with using this on the nanosec scale is the accuracy of your measurements, which depends on the accuracy of the hardware you are using.

Modern systems have power-saving and turbo-boost features built into the CPUs, and Hyper-Threading technology that allows one CPU core to appear as two CPUs by sharing ALUs etc.

While establishing an accurate baseline for some upstream measurements (subj: Get rid of ndo_xmit_flush / commit 0b725a2ca61), I was starting to see too much variation in my trafgen measurements.

I created a rather large one-liner, which I have converted into a script here: https://github.com/netoptimizer/network-testing/blob/master/bin/mon-ifpps
It allowed me to get a picture of the accuracy of my measurements, and they were not accurate enough. (For more real statistics, like std-dev, consider running these measurements through Rusty Russell's tool
https://github.com/rustyrussell/stats)

My findings:

  1. Disabling all C-states and P-states improved the accuracy.
  2. Disabling Hyper-Threading and power management in the BIOS also helped.
  3. The 10Gbit/s ixgbe ring-buffer cleanup interval also influenced the accuracy.

Reading +Jeremy Eder's blog post, it seems the best method for disabling these C- and P-states, and
keeping all CPUs in the C0/C1 state, is:

 # tuned-adm profile latency-performance

I found that the most stable ring-buffer cleanup interval for the ixgbe driver was 30 usecs, configured
on the command line:

 # ethtool -C eth5 rx-usecs 30
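To double-check which interval is currently in effect (e.g. before and after the change), the coalescing settings can be queried with the lowercase show variant of the option:

 # ethtool -c eth5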


Besides these tunings, my blog post on "Basic tuning for network overload testing" should still be followed.
Generally I've started using the "latency-performance" profile, but unless I need to measure a specific code change, I still use the ixgbe driver's default "dynamic" ring-buffer cleanup interval.

Details about the "ethtool -C" tuning are available in the blog post "Pktgen for network overload testing".

Jesper Dangaard Brouer: Packet Per Sec measurements for improving the Linux Kernel network stack

September 11, 2014 - 12:59
Many people (e.g. other kernel developers) do not understand why I'm using Packet Per Second (PPS) tests for measuring performance improvements in the Linux kernel network stack; this blog post explains why.

The basic problem with using large, MTU-sized packets (usually 1500 bytes) is that the transmission delay itself is enough to hide any improvement I'm making (e.g. a faster lookup function).

The transmission delay for 1514 bytes (+20 bytes of Ethernet overhead) at 10Gbit/s is 1227 nanosec:

  • ((bytes+wireoverhead)*8) / 10 Gbits = time-unit
  • ((1500+14+20)*8)/((10000*10^6))*10^9 = 1227.20 ns
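The same number can be reproduced on the command line; a small sketch using awk as the calculator (any calculator will do), which prints 1227.20 ns:
 $ awk 'BEGIN { printf "%.2f ns\n", ((1500+14+20)*8) / (10000*10^6) * 10^9 }'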

This means that if the network stack can generate (alloc/fill/free) a 1500-byte packet faster than every 1227ns, then it can fully utilize the bandwidth of the 10Gbit/s link.  And yes, we can already do so. Thus, with 1500-byte frames any stack performance improvement will only be measurable as a lower CPU utilization.


Let's face it; the kernel has been optimized heavily for the last 20 years.  Thus, the improvements we are able to come up with are going to be on the nanosec scale.
For example, I've found a faster way to clear the SKB, which saves 7 nanosec.  Being able to measure this performance improvement was essential while developing the faster clearing.

Let's assume the stack cost (alloc/fill/syscall/free) is 1200ns (thus faster than 1227ns); then a 7ns improvement is only 0.58%, which I can only measure as a lower CPU utilization (as the bandwidth limit has been reached), and in practice that cannot be measured accurately enough.


By lowering the packet size, the transmission delay that the stack cost (alloc/fill/syscall/free) can "hide behind" is reduced. With the smallest packet size of 64 bytes, it is significantly reduced, to:

  • ((64+20)*8)/((10000*10^6))*10^9 = 67.2ns

This basically exposes the stack's cost, as its current cost is larger than 67.2ns.  This can be used to get measurements that allow us to actually measure the improvement of the code changes we are making, even though this "only" translates into reduced CPU usage with big frames (which in turn translates into more processor time for your application).

In packets per sec (pps) this corresponds to approx 14.8Mpps:

  • 1sec/67.2ns =>  1/(67.2/10^9) = 14,880,952 pps
  • or directly from the packet size as:
  • 10Gbit / (bytes*8) = (10000*10^6)/((64+20)*8) = 14,880,952 pps
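As a quick check of the 14.8Mpps figure, the same awk-as-calculator trick works here too, printing 14880952 pps:
 $ awk 'BEGIN { printf "%d pps\n", (10000*10^6) / ((64+20)*8) }'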

Measuring packets per sec (PPS) instead of bandwidth has another advantage.  Instead of just comparing how much the PPS improved, one can translate the PPS into nanosec (between packets).
Comparing the nanosec used before and after will show us the nanosec saved by the given code change.

See how I used it in this and this commit to document the actual improvement of the changes I made.

Update: Deducing the nanosec saved by a given code change this way is usually only valid if you isolate your test to utilize a single CPU.
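A common way to get that isolation is to pin the traffic generator to one specific CPU with taskset; a sketch (the CPU number, and trafgen as the generator, are just placeholders for whatever your test uses):
 $ sudo taskset -c 2 trafgen ...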


Let's use the 14.8Mpps as an example of how to translate PPS to nanosec:

  • 1sec / pps => (1/14880952*10^9) = 67.2ns

Extra: Just for the calculation exercise.

  • How many packets per sec does 1227.20 ns correspond to:
    • 1sec/1227.2ns =>  1/(1227.2/10^9) = 814,863 pps
  • Can also be calculated directly from the packet size as:
    • 10Gbit / (bytes*8) = (10000*10^6)/((1514+20)*8) = 814,863 pps


Poul-Henning Kamp: NSA's porno-business.

September 2, 2014 - 22:30
In the first revelations from Snowden it emerged that all the big "cloud" companies had been "trojaned" by the NSA. In a long interview with The Guardian recently, Edward Snowden mentions that access to nude pictures was a kind of "employee perk" at the NSA. A few weeks later a load of celebrity nude pictures shows up ...

Peter Toft: Elementary OS - your next Linux choice?

August 31, 2014 - 22:52
In this blog post I take a closer look at an interesting choice of Linux variant: Elementary OS, which offers a simple but polished user interface. The user interface means a lot to most of us (and not a word about Windows 8.x). Within Linux there is Ubuntu's Unity, which many ...

Poul-Henning Kamp: NemID2: Do it right this time 5/5

August 26, 2014 - 11:46
The serial continues: The first installment was about ancient history. The second installment was about the history of the CPR. The third installment was about the Digital Signature. The fourth installment was about NemID. The question is what we, Denmark, should do when the contract for the current NemID soon expires. My view is that we should approach it completely ...

Peter Makholm: Version2.dk: How do you implement authentication correctly?

August 25, 2014 - 09:39
Over the years I have implemented various forms of authentication several times. Sometimes I have been bound by various protocol decisions, and other times I have had a freer hand. I have therefore also had the opportunity to collect a number of ideas about how authentication ...

Anton Berezin: YAPC::Europe 2014, day 2

August 23, 2014 - 16:23

Ignat Ignatov talked about physical formulas. When I was planning to attend this talk, I thought it was going to be some sort of symbolic formula computation, possibly with an analysis of the dimensions of the physical quantities.
However, despite my (a bit long in the tooth) background in physics, I did not understand a word of it. Apparently, some sort of unification of physical formulas, not entirely unlike the periodic table in chemistry, was presented, with almost no comprehensible details and with scary words like cohomology and algebraic topology. The fact that half of the slides were in Russian, while irrelevant for me personally, probably did not help matters for the majority of the people in the audience. I did not expect any questions at the end of the talk, but there were at least two, so I was probably wrong about the general level of understanding in the audience.

Laurent Dami talked about SQL::Abstract::FromQuery. He presented a query form of the Request Tracker and said that it is too complex - a premise many would agree with. The conclusion was that some more natural way to allow the user to specify complex queries is needed. Surprisingly, the answer to that was to use a formal grammar and make the user adhere to it. To me this sounds weird, but if one can find a non-empty set of users that would tolerate this, it may just work.

Denis Banovic talked about Docker, a virtualization container. I did not know much about Docker until this point, so it was useful to have someone to explain it to me.

The next talk was a long one, 50 minutes (as opposed to the 20 minutes that is somewhat standard for this conference): Peter "ribasushi" Rabbitson presented a crash course in SQL syntax and concepts. It looked like a beginner-level introduction to SQL, but it became better and better as it progressed. I even learned a thing or two myself. ribasushi has a way of explaining rather complicated things concisely, understandably, and memorably at the same time. An excellent talk.

Then there was a customary Subway sandwiches lunch.

Naim Shafiyev talked about network infrastructure automation. Since this is closely related to what I do at my day job, I paid considerable attention to what he had to say. I did not hear anything new, but hopefully the rest of the audience found the talk more useful. It did inspire me to submit a lightning talk though.

osfameron talked about immutable data structures in Perl and how to clone them with modifications, while making sure that the code does not look too ugly. Pretty standard stuff for functional languages, but pretty unusual in the land of Perl. The presentation was lively, with a lot of funny pictures and Donald Duck examples.

The coffee break was followed by another session of lightning talks, preceded by a giveaway of a number of free books for first-time YAPC attendees. Among the talks I remember were SQLite virtual table support in Perl by Laurent Dami, a web-based database table editor by Simun Kodzoman, LeoNerd's presentation about an XMPP replacement called Matrix, a Turing-complete (even if obfuscated) templating system by Jean-Baptiste Mazon of Sophia (sp!), and announcements of Nordic Perl Workshop 2014 (Helsinki, November) and Nordic Perl Workshop 2015 (Oslo, May).

Again, I did not go to the end-of-the-day keynote.

As a side note, the wireless seemed to be substantially more flaky than yesterday, which affected at least some of the lightning-talk presenters.

Anton Berezin: YAPC::Europe 2014, day 1

August 22, 2014 - 22:54

When I came to the venue 15 minutes before the official start of the registration, people at the registration desk were busily cutting sheets of paper into attendees' badges. Finding my badge turned out to be a tad nontrivial.

This conference is somewhat unusual not only because it is conducted over the weekend instead of in the middle of the week, but also because the keynotes for every day are pushed till the end, even after the daily lightning talks session.

The welcome talk from Marian was about practical things such as room locations, dinner, lunches, transportation and so on. Then I went on stage to announce the location of YAPC::Europe 2015 (which is Granada, Spain, by the way). After that Jose Luis Martinez from Barcelona.pm gave a short presentation of YAPC in Granada, and Diego Kuperman gave a little present from Granada to Sofia.

Mihai Pop of Cluj.pm presented a talk called "Perl Secret". It was basically a 20-minute version of BooK's lightning talk about Perl secret operators, somewhat diluted by interspersed references to minions. It was entertaining.

The great Mark Overmeer talked about translation with context. He went beyond the usual example of multiple variants of plural values in some languages, and talked about solving localization problems related to gender and so on. The module solving these problems is Log::Report::Translate::Context. As always, great attention to detail from Mark.

After lunch (sandwiches from Subway), Alex Balhatchet of Nestoria presented the hurdles of geocoding, with solutions. My co-workers and I had encountered similar problems on a far smaller scale, so I could understand the pains, and had a great interest in hearing about the solutions.

Then I attended a very inspiring talk by Max Maischein from Frankfurt about using Perl as a DLNA remote and as a DLNA media server. I immediately felt the urge to play with the code he published and try to adapt it to my own TV at home. There was even a live demo of using DLNA to stream the live feed of the talk, provided by the conference organizers, to Max's laptop. And it even worked, mostly.

Ervin Ruci talked more about geocoding; this talk partially touched on the same problems Alex Balhatchet was talking about. Unfortunately, it was substantially less detailed, so I was somewhat underwhelmed by it. The presenter mentioned cool things like dealing with the fuzziness of the input data using hidden Markov models, but did not expand on them.

StrayTaoist described how to access raw data from space telescopes using (of course) Perl. A very lively talk. There was a lot of astronomy porn in there.

Luboŝ Kolouch from the Czech Republic talked about automotive logistics, and how open source solutions work where proprietary solutions do not. The software needs to be reliable enough to ensure that only 1.5 hours pass between a part being ordered and its physical delivery to the factory.

After a coffee break with more mingling, the inimitable R Geoffrey Avery choir-mastered an hour of lightning talks. Most talks were somewhat "serious" today; I hope we see more "fun" ones in the coming days.

Unfortunately, I missed the first keynote of the conference from Curtis "Ovid" Poe, so I cannot really say anything about it.

Finally, we went to Restaurant Lebed for the conference dinner. The location is superb, with a great view over a lake. The food was great, too. We also got to enjoy some ethnic Bulgarian music and dancing; not too much, and not too little.

Lots of cheers to Marian and the team of volunteers for organizing what so far turns out to be a great conference.

Poul-Henning Kamp: NEMID how 4/many

August 22, 2014 - 14:37
The serial continues: The first installment was about ancient history. The second installment was about the history of the CPR. The third installment was about the Digital Signature. Now we have reached NemID as we know it today, and why it ended up the way it did. Let us take the good part first: it was realized that some kind of two-factor authentication ...

Poul-Henning Kamp: Digital Signature: the NemID prototype 3/many

August 19, 2014 - 18:15
The first installment was about ancient history. The second installment was about the history of the CPR. Now it is the turn of the Digital Signature, the immediate predecessor of NemID as we know it. To quote someone who worked a great deal with the Digital Signature, there were only four problems with it: Sign ...

Jesper Nyerup: Mirroring Ceph

August 18, 2014 - 09:33

I'm glad to announce that One.com's public mirror service has begun mirroring Ceph's download section. Ceph is a distributed object store and file system, which scales elegantly and has excellent fault tolerance. Ceph has official mirrors in the Western US and the Netherlands, and a handful of community-driven mirrors all over the world, now including this one in Denmark, well connected in Northern Europe. We welcome anyone to use it to suit their needs. The Ceph mirror is available over HTTP here, and is also available over Rsync and FTP.

One.com runs its mirror service both for operational independence and to be able to give something back to the open source software community. The service mirrors a number of open source projects, and more are added frequently. I'm lucky to be part of the team of mirror maintainers, and we'd love to hear from you if you have questions or ideas for the service.

Peter Toft: What does Google know about me... Quite a lot! (part 2/2)

August 17, 2014 - 15:56
Where the previous blog post was about Google Maps, it is also interesting to see what Google thinks my interests are. At http://www.google.com/settings/ads there is some interesting reading for me (you probably have the equivalent). That the list of what Google thinks I am interested in ...

Peter Toft: What does Google know about me... Quite a lot! (part 1/2)

August 17, 2014 - 15:55
Several of my friends on Facebook have, between them, posted quite a bit of Google information which is worth collecting here. I have split the two stories into two blog posts: this one and another (click here to read it). I use Google Maps quite often, especially to avoid driving towards a place where ...