## planet.opensource.dk

Planet OpenSource DK - http://planet.opensource.dk/
Updated: 50 minutes 17 seconds ago

### Peter Toft: Available: Crappy ADSL line. Wanted: Virtual server

May 22, 2013 - 18:20
I have run my own Linux mail and web server at home for many years. It has gone well, but the last month has been one long crisis. My Telenor ADSL line is unstable. I lose the connection briefly - 30 seconds - a couple of times per day. I have been able to get about 5 Mbit/s down and about 1 Mbit up - but not...

### Poul-Henning Kamp: Haldor Topsøe: Computer Customer #1 & #2

May 21, 2013 - 07:05
Haldor Topsøe has died. I will leave it to others to write the official obituaries and settle for pointing out that he was computer customer number #1 and #2 in the Danish computer industry. Haldor Topsøe's main business 50-60 years ago was ammonia plants based on the company's catalysts. Cataly...

### Klavs Klavsen: NemID - do hardware tokens help?

May 18, 2013 - 08:57

I am about to report my change of address to the CPR register, and like a couple of other things this falls under the new law of December 1, 2012 on mandatory digital self-service - so I had a challenge, since neither my wife nor I trust NemID enough to have the OCES part (the signature part) of NemID.

I note that many people who I otherwise know to have great technical competence do not understand at all why I don't have the OCES part enabled on my NemID - and in the last instance the argument becomes that I should then just get the hardware token they have (finally) come out with.

### Peter Toft: Why did my disk fill up?

May 16, 2013 - 23:02
When I worked at Nokia, my quick-witted UNIX guru Anders wrote a clever disk analysis program. The design idea was that at every directory level you could get information about how old the data below it is and how many GB were stored underneath. In a large company, huge amounts of data often accumulated and no...

### Søren Sandmann: Fast Multiplication of Normalized 16 bit Numbers with SSE2

May 16, 2013 - 07:14

If you are compositing pixels with 16 bits per component, you often need this computation:

```c
uint16_t a, b, r;

/* cast to avoid signed overflow when a and b are promoted to int */
r = ((uint32_t) a * b + 0x7fff) / 65535;
```

There is a well-known way to do this quickly without a division:

```c
uint32_t t;

t = (uint32_t) a * b + 0x8000;
r = (t + (t >> 16)) >> 16;
```

Since we are compositing pixels we want to do this with SSE2 instructions, but because the code above uses 32 bit arithmetic, we can only do four operations at a time, even though SSE registers have room for eight 16 bit values. Here is a direct translation into vector code (strictly speaking, pmulld and packusdw are SSE4.1 instructions rather than SSE2):

```
a = punpcklwd (a, 0);
b = punpcklwd (b, 0);
a = pmulld (a, b);
a = paddd (a, 0x8000);
b = psrld (a, 16);
a = paddd (a, b);
a = psrld (a, 16);
a = packusdw (a, 0);
```

But there is another way that better matches SSE2:

```c
uint16_t lo, hi, t, r;

/* casts avoid signed overflow when a and b are promoted to int */
hi = ((uint32_t) a * b) >> 16;
lo = ((uint32_t) a * b) & 0xffff;
t = lo >> 15;
hi += t;
t = hi ^ 0x7fff;
if ((int16_t) lo > (int16_t) t)
    lo = 0xffff;
else
    lo = 0x0000;
r = hi - lo;
```

This version is better because it avoids the unpacking to 32 bits. Here is the translation into SSE2:

```
t = pmulhuw (a, b);
a = pmullw (a, b);
b = psrlw (a, 15);
t = paddw (t, b);
b = pxor (t, 0x7fff);
a = pcmpgtw (a, b);
a = psubw (t, a);
```

This is not only shorter, it also makes use of the full width of the SSE registers, computing eight results at a time.
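For readers who would rather see compiler intrinsics than bare mnemonics, here is a minimal sketch of the same sequence using the standard SSE2 intrinsics from <emmintrin.h> (the wrapper function name is mine, not from the post):

```c
#include <emmintrin.h>

/* Multiply eight normalized 16-bit values: r = round (a * b / 65535).
 * Same instruction sequence as above: pmulhuw, pmullw, psrlw, paddw,
 * pxor, pcmpgtw, psubw. */
static __m128i
mul_norm_u16 (__m128i a, __m128i b)
{
    const __m128i x7fff = _mm_set1_epi16 (0x7fff);
    __m128i lo, hi, t, mask;

    hi = _mm_mulhi_epu16 (a, b);      /* high 16 bits of each product */
    lo = _mm_mullo_epi16 (a, b);      /* low 16 bits of each product */
    t = _mm_srli_epi16 (lo, 15);
    hi = _mm_add_epi16 (hi, t);
    t = _mm_xor_si128 (hi, x7fff);
    mask = _mm_cmpgt_epi16 (lo, t);   /* 0xffff where lo > t (signed) */
    return _mm_sub_epi16 (hi, mask);  /* subtracting 0xffff adds 1 */
}
```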

Unfortunately SSE2 doesn’t have 8-bit variants of pmulhuw, pmullw, and psrlw, so we can’t use this trick for the more common case where pixels have 8 bits per component.

Exercise: Why does the second version work?

### Søren Sandmann: Sysprof 1.1.8

May 16, 2013 - 07:14

A new version 1.1.8 of Sysprof is out.

This is a release candidate for 1.2.0 and contains mainly bug fixes.

### Søren Sandmann: Gamma Correction vs. Premultiplied Pixels

May 16, 2013 - 07:14

Pixels with 8 bits per channel are normally sRGB encoded because that allocates more bits to darker colors where human vision is the most sensitive. (Actually, it’s really more of a historical accident, but sRGB nevertheless remains useful for this reason). The relationship between sRGB and linear RGB is that you get an sRGB pixel by raising each component of a linear pixel to the power of $1/2.2$.

A lot of graphics software does alpha blending directly on these sRGB pixels using alpha values that are linearly coded (i.e., an alpha value of 0 means no coverage, 0.5 means half coverage, and 1 means full coverage). Because alpha blending is best done with premultiplied pixels, such systems store pixels in this format:

[ alpha, alpha * red_s, alpha * green_s, alpha * blue_s ]

where alpha is linearly coded, and (red_s, green_s, blue_s) are sRGB coded. As long as you are happy with blending in sRGB, this works well. Also, if you simply discard the alpha channel of such pixels and display them directly on a monitor, it will look as if the pixels were alpha blended (in the sRGB space) on top of a black background, which is the desired result.

But what if you want to blend in linear RGB? If you use the format above, some expensive conversions will be required. To convert to premultiplied linear, you have to first divide by alpha, then raise each color to 2.2, then multiply by alpha. To convert back, you must divide by alpha, raise to $1/2.2$, then multiply by alpha.

The conversions can be avoided if you store the pixels linearly, i.e., keeping the premultiplication, but coding red, green, and blue linearly instead of as sRGB. This makes blending fast, but the downside is that you need deeper pixels. With only 8 bits per pixel, the linear coding loses too much precision in darker tones. Another problem is that to display these pixels, you will either have to convert them to sRGB, or if the video card can scan them out directly, you have to make sure that the gamma ramp is set to compensate for the fact that the monitor expects sRGB pixels.

A third option is this format:

[ alpha, alpha_s * red_s, alpha_s * green_s, alpha_s * blue_s ]

That is, the alpha channel is stored linearly, and the color channels are stored in sRGB, premultiplied with the alpha value raised to 1/2.2. I.e., the red component is now

(red * alpha)^(1/2.2),

where before it was

alpha * red^(1/2.2).

It is sufficient to use 8 bits per channel with this format because of the sRGB encoding. Discarding the alpha channel and displaying the pixels on a monitor will produce pixels that are alpha blended (in linear space) against black, as desired.

You can convert to linear RGB simply by raising the R, G, and B components to 2.2, and back by raising to $1/2.2$. Or, if you feel like cheating, use an exponent of 2 so that the conversions become a multiplication and a square root respectively.
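As a concrete sketch of those conversions (helper names are mine; the text's pure 2.2 power is used rather than the piecewise sRGB curve):

```c
#include <math.h>
#include <stdint.h>

/* Premultiplied-sRGB channel, 0..255, to premultiplied linear, 0..1. */
static double
channel_to_linear (uint8_t c)
{
    return pow (c / 255.0, 2.2);
}

/* Premultiplied linear, 0..1, back to a premultiplied-sRGB channel. */
static uint8_t
channel_to_srgb (double c)
{
    return (uint8_t) (pow (c, 1.0 / 2.2) * 255.0 + 0.5);
}
```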

This is also the pixel format to use with texture samplers that implement the sRGB OpenGL extensions (textures and framebuffers). These extensions say precisely that the R, G, and B components are raised to 2.2 before texture filtering, and raised to 1/2.2 after the final raster operation.

### Søren Sandmann: Over is not Translucency

May 16, 2013 - 07:14

The ">">">Porter/Duff Over operator, also known as the “Normal” blend mode in Photoshop, computes the amount of light that is reflected when a pixel partially covers another:

The fraction of bg that is covered is denoted alpha. This operator is the correct one to use when the foreground image is an opaque mask that partially covers the background:

A photon that hits this image will be reflected back to your eyes by either the foreground or the background, but not both. For each foreground pixel, the alpha value tells us the probability of each:

$a \cdot \text{fg} + (1 - a) \cdot \text{bg}$

This is the definition of the Porter/Duff Over operator for non-premultiplied pixels.

But if alpha is interpreted as translucency, then the Over operator is not the correct one to use. The Over operator will act as if each pixel is partially covering the background:

Which is not how translucency works. A translucent material reflects some light and lets other light through. The light that is let through is reflected by the background and interacts with the foreground again.

Let’s look at this in more detail. Please follow along in the diagram to the right. First with probability $a$, the photon is reflected back towards the viewer:

$a \cdot \text{fg}$

With probability $(1 - a)$, it passes through the foreground, hits the background, and is reflected back out. The photon now hits the backside of the foreground pixel. With probability $(1 - a)$, the foreground pixel lets the photon back out to the viewer. The result so far:

\begin{align*} &a\cdot \text{fg} \\ +&(1 - a) \cdot \text{bg} \cdot (1 - a) \end{align*}

But we are not done yet, because with probability $a$ the foreground pixel reflects the photon once again back towards the background pixel. There it will be reflected, hit the backside of the foreground pixel again, which lets it through to our eyes with probability $(1 - a)$. We get another term where the final $(1 - a)$ is replaced with $a \cdot \text{fg} \cdot \text {bg} \cdot (1 - a)$:

\begin{align*} &a\cdot \text{fg} \\ +&(1 - a) \cdot \text{bg} \cdot (1 - a)\\ +&(1 - a) \cdot \text{bg} \cdot a \cdot \text{fg} \cdot \text{bg} \cdot (1 - a) \end{align*}

And so on. In each round, we gain another term which is identical to the previous one, except that it has an additional $a \cdot \text{fg} \cdot \text{bg}$ factor:

\begin{align*} &a\cdot \text{fg} \\ +&(1 - a) \cdot \text{bg} \cdot (1 - a)\\ +&(1 - a) \cdot \text{bg} \cdot a \cdot \text{fg} \cdot \text{bg} \cdot (1 - a)\\ +&(1 - a) \cdot \text{bg} \cdot a \cdot \text{fg} \cdot \text{bg} \cdot a \cdot \text{fg} \cdot \text{bg} \cdot (1 - a) \\ +&\cdots \end{align*}

or more compactly:

$\displaystyle a \cdot \text{fg} + (1 - a)^2 \cdot \text{bg} \cdot \sum_{i=0}^\infty (a \cdot \text{fg} \cdot \text{bg})^i$

Because we are dealing with pixels, $a$, $\text{fg}$, and $\text{bg}$ are all less than 1, so the sum is a geometric series:

$\displaystyle \sum_{i=0}^\infty x^i = \frac{1}{1 - x}$

Putting them together, we get:

$\displaystyle a \cdot \text{fg} + \frac{(1 - a)^2 \cdot \text{bg}}{1 - a \cdot \text{fg} \cdot \text{bg}}$

I have sidestepped the issue of premultiplication by assuming that background alpha is 1. The calculations with premultiplied colors are similar, and for the color components, the result is simply:

$\displaystyle r = \text{fg} + \frac{(1 - a_\text{fg})^2 \cdot \text{bg}}{1 - \text{fg}\cdot\text{bg}}$

The issue of destination alpha is more complicated. With the Over operator, both foreground and background are opaque masks, so the light that survives both has the same color as the input light. With translucency, the transmitted light has a different color, which means the resulting alpha value must in principle be different for each color component. But that’s not possible for ARGB pixels. A similar argument to the above shows that the resulting alpha value would be:

$\displaystyle r = 1 - \frac{(1 - a)\cdot (1 - b)}{1 - \text{fg} \cdot \text{bg}}$

where $b$ is the background alpha. The problem is the dependency on $\text{fg}$ and $\text{bg}$. If we simply assume for the purposes of the alpha computation that $\text{fg}$ and $\text{bg}$ are equal to $a$ and $b$, we get this:

$\displaystyle r = 1 - \frac{(1 - a)\cdot (1 - b)}{1 - a \cdot b}$

which is equal to

$\displaystyle a + \frac{(1 - a)^2 \cdot b}{1 - a \cdot b}$
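The equality can be verified by putting both expressions over the common denominator $1 - a \cdot b$:

$\displaystyle 1 - \frac{(1 - a)(1 - b)}{1 - ab} = \frac{(1 - ab) - (1 - a - b + ab)}{1 - ab} = \frac{a + b - 2ab}{1 - ab} = \frac{a(1 - ab) + (1 - a)^2 \cdot b}{1 - ab} = a + \frac{(1 - a)^2 \cdot b}{1 - ab}$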

I.e., exactly the same computation as the one for the color channels. So we can define the Translucency operator as this:

$\displaystyle r = \text{fg} + \frac{(1 - a)^2 \cdot \text{bg}}{1 - \text{fg} \cdot \text{bg}}$

for all four channels.
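A minimal per-channel sketch of this operator (the function name is mine; components are premultiplied, in the 0..1 range):

```c
/* Translucency operator: fg, bg are premultiplied components (color or
 * alpha) in 0..1, a is the foreground alpha. Note fg * bg < 1 holds
 * whenever either component is below 1, so the division is safe. */
static double
translucency (double fg, double bg, double a)
{
    return fg + ((1.0 - a) * (1.0 - a) * bg) / (1.0 - fg * bg);
}
```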

Here is an example of what the operator looks like. The image below is what you will get if you use the Over operator to implement a selection rectangle. Mouse over to see what it would look like if you used the Translucency operator.

Both were computed in linear RGB. Typical implementations will often compute the Over operator in sRGB, so that's what you see if you actually select some icons in Nautilus. If you want to compare all three, open these in tabs:

Over, in sRGB

Translucency, in linear RGB

Over, in linear RGB

And for good measure, even though it makes zero sense to do this,

Translucency, in sRGB

### Søren Sandmann: Sysprof 1.2.0

May 16, 2013 - 07:14

A new stable release of Sysprof is now available. Download version 1.2.0.

### Søren Sandmann: Big-O Misconceptions

May 16, 2013 - 07:14

In computer science and sometimes mathematics, big-O notation is used to talk about how quickly a function grows while disregarding multiplicative and additive constants. When classifying algorithms, big-O notation is useful because it lets us abstract away the differences between real computers as just multiplicative and additive constants.

Big-O is not a difficult concept at all, but it seems to be common even for people who should know better to misunderstand some aspects of it. The following is a list of misconceptions that I have seen in the wild.

But first a definition: We write

$f(n) = O(g(n))$

when $f(n) \le M g(n)$ for sufficiently large $n$, for some positive constant $M$.

Misconception 1: “The Equals Sign Means Equality”

$f(n) = O(g(n))$

is a widespread travesty. If you take it at face value, you can deduce that since $5 n$ and $3 n$ are both equal to $O(n)$, then $3 n$ must be equal to $5 n$ and so $3 = 5$.

The expression $f(n) = O(g(n))$ doesn’t type check. The left-hand-side is a function, the right-hand-side is a … what, exactly? There is no help to be found in the definition. It just says “we write” without concerning itself with the fact that what “we write” is total nonsense.

The way to interpret the right-hand side is as a set of functions:

$O(f) = \{ g \mid g(n) \le M f(n) \text{ for some } M > 0 \text{ and sufficiently large } n \}.$

With this definition, the world makes sense again: If $f(n) = 3 n$ and $g(n) = 5 n$, then $f \in O(n)$ and $g \in O(n)$, but there is no equality involved so we can’t make bogus deductions like $3=5$. We can however make the correct observation that $O(n) \subseteq O(n \log n)\subseteq O(n^2) \subseteq O(n^3)$, something that would be difficult to express with the equals sign.

Misconception 2: “Informally, Big-O Means ‘Approximately Equal’”

If an algorithm takes $5 n^2$ seconds to complete, that algorithm is $O(n^2)$ because for the constant $M=7$ and sufficiently large $n$, $5 n^2 \le 7 n^2$. But an algorithm that runs in constant time, say 3 seconds, is also $O(n^2)$ because for sufficiently large $n$, $3 \le n^2$.

So informally, big-O means approximately less than or equal, not approximately equal.

If someone says “Topological Sort, like other sorting algorithms, is $O(n \log n)$”, then that is technically correct, but severely misleading, because Topological Sort is also $O(n)$ which is a subset of $O(n \log n)$. Chances are whoever said it meant something false.

If someone says “In the worst case, any comparison based sorting algorithm must make $O(n \log n)$ comparisons” that is not a correct statement. Translated into English it becomes:

“In the worst case, any comparison based sorting algorithm must make fewer than or equal to $M n \log (n)$ comparisons”

which is not true: You can easily come up with a comparison based sorting algorithm that makes more comparisons in the worst case.

To be precise about these things we have other types of notation at our disposal. Informally:

$O()$: Less than or equal, disregarding constants
$\Omega()$: Greater than or equal, disregarding constants
$o()$: Strictly less than, disregarding constants
$\Theta()$: Equal to, disregarding constants

and some more. The correct statement about lower bounds is this: “In the worst case, any comparison based sorting algorithm must make $\Omega(n \log n)$ comparisons.” In English that becomes:

“In the worst case, any comparison based sorting algorithm must make at least $M n \log (n)$ comparisons”

which is true. And a correct, non-misleading statement about Topological Sort is that it is $\Theta(n)$, because it has a lower bound of $\Omega(n)$ and an upper bound of $O(n)$.

Misconception 3: “Big-O is a Statement About Time”

Big-O is used for making statements about functions. The functions can measure time or space or cache misses or rabbits on an island or anything or nothing. Big-O notation doesn’t care.

In fact, when used for algorithms, big-O is almost never about time. It is about primitive operations.

When someone says that the time complexity of MergeSort is $O(n \log n)$, they usually mean that the number of comparisons that MergeSort makes is $O(n \log n)$. That in itself doesn't tell us what the time complexity of any particular MergeSort might be because that would depend on how much time it takes to make a comparison. In other words, the $O(n \log n)$ refers to comparisons as the primitive operation.
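To make “comparisons as the primitive operation” concrete, here is a small sketch (all names mine) that counts the key comparisons made by the C library's qsort() instead of measuring time:

```c
#include <stdio.h>
#include <stdlib.h>

static long cmp_count;  /* number of primitive operations performed */

static int
counting_cmp (const void *a, const void *b)
{
    cmp_count++;
    return *(const int *)a - *(const int *)b;
}

int
main (void)
{
    int data[] = { 5, 3, 8, 1, 9, 2, 7, 4, 6, 0 };
    int n = sizeof (data) / sizeof (data[0]);

    qsort (data, n, sizeof (int), counting_cmp);
    printf ("%d elements sorted using %ld comparisons\n", n, cmp_count);
    return 0;
}
```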

The important point here is that when big-O is applied to algorithms, there is always an underlying model of computation. The claim that the time complexity of MergeSort is $O(n \log n)$ implicitly references a model of computation where a comparison takes constant time and everything else is free.

Which is fine as far as it goes. It lets us compare MergeSort to other comparison based sorts, such as QuickSort or ShellSort or BubbleSort, and in many real situations, comparing two sort keys really does take constant time.

However, it doesn’t allow us to compare MergeSort to RadixSort because RadixSort is not comparison based. It simply doesn’t ever make a comparison between two keys, so its time complexity in the comparison model is 0. The statement that RadixSort is $O(n)$ implicitly references a model in which the keys can be lexicographically picked apart in constant time. Which is also fine, because in many real situations, you actually can do that.

To compare RadixSort to MergeSort, we must first define a shared model of computation. If we are sorting strings that are $k$ bytes long, we might take “read a byte” as a primitive operation that takes constant time with everything else being free.

In this model, MergeSort makes $O(n \log n)$ string comparisons each of which makes $O(k)$ byte comparisons, so the time complexity is $O(k\cdot n \log n)$. One common implementation of RadixSort will make $k$ passes over the $n$ strings with each pass reading one byte, and so has time complexity $O(n k)$.

Misconception 4: “Big-O Is About Worst Case”

Big-O is often used to make statements about functions that measure the worst case behavior of an algorithm, but big-O notation doesn’t imply anything of the sort.

If someone is talking about the randomized QuickSort and says that it is $O(n \log n)$, they presumably mean that its expected running time is $O(n \log n)$. If they say that QuickSort is $O(n^2)$ they are probably talking about its worst case complexity. Both statements can be considered true depending on what type of running time the functions involved are measuring.

### Søren Sandmann: Porter/Duff Compositing and Blend Modes

May 16, 2013 - 07:14

In the Porter/Duff compositing algebra, images are equipped with an alpha channel that determines on a per-pixel basis whether the image is there or not. When the alpha channel is 1, the image is fully there, when it is 0, the image isn’t there at all, and when it is in between, the image is partially there. In other words, the alpha channel describes the shape of the image, it does not describe opacity. The way to think of images with an alpha channel is as irregularly shaped pieces of cardboard, not as colored glass. Consider these two images:

When we combine them, each pixel of the result can be divided into four regions:

One region where only the source is present, one where only the destination is present, one where both are present, and one where neither is present.

By deciding on what happens in each of the four regions, various effects can be generated. For example, if the destination-only region is treated as blank, the source-only region is filled with the source color, and the ‘both’ region is filled with the destination color like this:

The effect is as if the destination image is trimmed to match the source image, and then held up in front of it:

The Porter/Duff operator that does this is called “Dest Atop”.

There are twelve of these operators, each one characterized by its behavior in the three regions: source, destination and both. The ‘neither’ region is always blank. The source and destination regions can either be blank or filled with the source or destination colors respectively.

The formula for the operators is a linear combination of the contents of the four regions, where the weights are the areas of each region:

$A_\text{src} \cdot [s] + A_\text{dest} \cdot [d] + A_\text{both} \cdot [b]$

Where $[s]$ is either 0 or the color of the source pixel, $[d]$ either 0 or the color of the destination pixel, and $[b]$ is either 0, the color of the source pixel, or the color of the destination pixel. With the alpha channel being interpreted as coverage, the areas are given by these formulas:

$A_\text{src} = \alpha_\text{s} \cdot (1 - \alpha_\text{d})$
$A_\text{dest} = \alpha_\text{d} \cdot (1 - \alpha_\text{s})$
$A_\text{both} = \alpha_\text{s} \cdot \alpha_\text{d}$

The alpha channel of the result is computed in a similar way:

$A_\text{src} \cdot [\text{as}] + A_\text{dest} \cdot [\text{ad}] + A_\text{both} \cdot [\text{ab}]$

where $[\text{as}]$ and $[\text{ad}]$ are either 0 or 1 depending on whether the source and destination regions are present, and where $[\text{ab}]$ is 0 when the ‘both’ region is blank, and 1 otherwise.

Here is a table of all the Porter/Duff operators:

| | $[\text{s}]$ | $[\text{d}]$ | $[\text{b}]$ |
|---|---|---|---|
| Src | $s$ | $0$ | $s$ |
| Atop | $0$ | $d$ | $s$ |
| Over | $s$ | $d$ | $s$ |
| In | $0$ | $0$ | $s$ |
| Out | $s$ | $0$ | $0$ |
| Dest | $0$ | $d$ | $d$ |
| DestAtop | $s$ | $0$ | $d$ |
| DestOver | $s$ | $d$ | $d$ |
| DestIn | $0$ | $0$ | $d$ |
| DestOut | $0$ | $d$ | $0$ |
| Clear | $0$ | $0$ | $0$ |
| Xor | $s$ | $d$ | $0$ |

And here is how they look:

Despite being referred to as alpha blending and despite alpha often being used to model opacity, in concept Porter/Duff is not a way to blend the source and destination shapes. It is a way to overlay, combine and trim them as if they were pieces of cardboard. The only places where source and destination pixels are actually blended are where the antialiased edges meet.

Blending
Photoshop and the Gimp have a concept of layers which are images stacked on top of each other. In Porter/Duff, stacking images on top of each other is done with the “Over” operator, which is also what Photoshop/Gimp use by default to composite layers:

Conceptually, two pieces of cardboard are held up with one in front of the other. Neither shape is trimmed, and in places where both are present, only the top layer is visible.

A layer in these programs also has an associated Blend Mode which can be used to modify what happens in places where both are visible. For example, the ‘Color Dodge’ blend mode computes a mix of source and destination according to this formula:

\begin{equation*} B(s,d)= \begin{cases} 0 & \text{if } d = 0, \\ 1 & \text{if } d \ge (1 - s), \\ d / (1 - s) & \text{otherwise} \end{cases} \end{equation*}

The result is this:

Unlike with the regular Over operator, in this case there is a substantial chunk of the output where the result is actually a mix of the source and destination.

Layers in Photoshop and Gimp are not tailored to each other (except for layer masks, which we will ignore here), so the compositing of the layer stack is done with the source-only and destination-only region set to source and destination respectively. However, there is nothing in principle stopping us from setting the source-only and destination-only regions to blank, but keeping the blend mode in the ‘both’ region, so that tailoring could be supported alongside blending. For example, we could set the ‘source’ region to blank, the ‘destination’ region to the destination color, and the ‘both’ region to ColorDodge:

Here are the four combinations that involve a ColorDodge blend mode:

In this model the original twelve Porter/Duff operators can be viewed as the results of three simple blend modes:

Source: $B(s, d) = s$
Dest: $B(s, d) = d$
Zero: $B(s, d) = 0$

In this generalization of Porter/Duff the blend mode is chosen from a large set of formulas, and each formula gives rise to four new compositing operators characterized by whether the source and destination are blank or contain the corresponding pixel color.

Here is a table of the operators that are generated by various blend modes:

The general formula is still an area weighted average:

$A_\text{src} \cdot [s] + A_\text{dest} \cdot [d] + A_\text{both}\cdot B(s, d)$

where [s] and [d] are the source and destination colors respectively or 0, but where $B(s, d)$ is no longer restricted to one of $0$, $s$, and $d$, but can instead be chosen from a large set of formulas.
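A minimal sketch of this generalized formula (names are mine, not from any particular library; components are non-premultiplied, in 0..1):

```c
typedef double (*blend_fn) (double s, double d);

/* The ColorDodge blend mode, as defined earlier. */
static double
color_dodge (double s, double d)
{
    if (d == 0.0)
        return 0.0;
    if (d >= 1.0 - s)
        return 1.0;
    return d / (1.0 - s);
}

/* One color channel of the generalized operator: as/ad are source and
 * destination alpha; src_present/dest_present choose whether [s] and
 * [d] are the pixel color or 0, which is what characterizes each of
 * the four operators a blend mode gives rise to. */
static double
composite_channel (double as, double ad, double s, double d,
                   int src_present, int dest_present, blend_fn B)
{
    double A_src = as * (1.0 - ad);
    double A_dest = ad * (1.0 - as);
    double A_both = as * ad;

    return A_src * (src_present ? s : 0.0)
         + A_dest * (dest_present ? d : 0.0)
         + A_both * B (s, d);
}
```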

The output of the alpha channel is the same as before:

$A_\text{src} \cdot [\text{as}] + A_\text{dest} \cdot [\text{ad}] + A_\text{both} \cdot [\text{ab}]$

except that [ab] is now determined by the blend mode. For the Zero blend mode there is no coverage in the both region, so [ab] is 0; for most others, there is full coverage, so [ab] is 1.

### Poul-Henning Kamp: Breaking down the binary boundary

May 15, 2013 - 12:12
A great deal of information is lost when we compile from source code to binary, and I am not just thinking of the few sporadically placed comments being eliminated. Reverse engineering is a fine old sport in the industry, one that has seen a particular revival in the virus/malware business. But it is still...

### Peter Hansteen: DNSSEC Mastery, Or How To Make Your Name Service Verifiable And Trustworthy

May 12, 2013 - 00:51
A DNSSEC book for the working sysadmin, likely to put you ahead of the pack in securing an essential Internet service.

I have a confession to make. Michael W. Lucas is a long time favorite of mine among tech authors. When Michael descends on a topic and produces a book, you can expect the result to contain loads of useful information, presented along with humor and real-life anecdotes so you will want to explore the topic in depth on your own systems.

In DNSSEC Mastery (apparently the second installment in what could become an extensive Mastery series -- the first title was SSH Mastery, reviewed here -- from Michael's own Tilted Windmill Press), the topic is how to make your own contribution to making the Internet name service more reliable by having your own systems present verifiable, trustworthy information.

Before addressing the book itself, I'll spend some time explaining why this topic is important. The Domain Name System (usually referred to as DNS or simply 'the name service', even if nitpickers would be right that there is more than one) is one of the old-style Internet services that was created to solve a particular set of problems (humans are a lot better at remembering names than strings of numbers) in the early days of networking, when security was not really a concern.

Old-fashioned DNS moves data via UDP, the connectionless no-guarantees-ever protocol mainly because the low protocol overhead in most cases means the answer arrives faster than it would have otherwise. Reliable delivery was sacrificed for speed, and in general, the thing just works. DNS is one of those things that makes the Internet usable for techies and non-techies alike.

The other thing that was sacrificed, or more likely never even considered important enough to care about at the time, was any hope of reliably verifying that the information received via the DNS service was in fact authentic and correct.

When you ask an application to look up a name, say you want to see if anything's new at bsdly.blogspot.com or if you want to send me mail to be delivered at bsdly.net, the answer comes back, not necessarily from the host that answers authoritatively for the domain, but more likely from the cache of a name server near you, and serves up mainly one or more IP addresses, with no guarantee other than that it is, indeed, a record type that contains one or more IP addresses that appear to match your application's query.

Or to put it more bluntly, with traditional DNS, it's possible for a well positioned attacker to feed you falsified information (i.e. leading your packets to somewhere they don't belong or to somewhere you never intended, potentially along with your confidential data), even if the original DNS designers appear to have considered the scenario rather unlikely back in the nineteen-eighties.

With the realization that the Internet was becoming mainstream during the 1990s and that non-techies would rely on it for such things as banking services came support for cryptographically enhanced versions of several of the protocols that take care of the bulk of Internet traffic payloads, and even the essential and mostly ignored (at least by non-techies) DNS protocol was enhanced several times over the years. Around the turn of the century came the RFCs that describe cryptographic signatures as part of the enhanced name service, and finally in 2005 the trio of RFCs (4033, 4034 and 4035) that form the core of the modern DNSSEC specification were issued.

But up until quite recently, most if not all DNSSEC implementations were either incomplete or considered experimental, and getting a working DNSSEC setup in place has been an admirable if rarely fulfilled ambition among already overworked sysadmins.

Then, at what seems to be exactly the right moment, Michael W. Lucas publishes DNSSEC Mastery, which is a compact and extremely useful guide to creating your own DNSSEC setup, avoiding the many pitfalls and scary manoeuvres you will find described in the HOWTO-style DNSSEC guides you're likely to encounter after a web search on the topic.

The book is aimed at the working sysadmin who already has at least basic operational knowledge of running a name service. Starting with one DNSSEC implementation that is known to be complete and functional (ISC BIND 9.9 -- Michael warns early on very clearly that earlier versions will not work -- if your favorite system doesn't have that packaged yet, you can build your own or start bribing or yelling at the relevant package maintainer), this book takes a very practical, hands-on approach to its topic in a way that I think is well matched to the intended audience.

Keeping in mind that the one thing a working sysadmin is always short on is time, it is likely a strong advantage that this book is so compact. With 12 chapters, it comes in at just short of 100 pages in the PDF version I used for most of this review. With the stated requirement that the reader needs to be reasonably familiar with running a DNS service, the introductory chapters fairly quickly move on to give an overview of public key cryptography as it applies to DNSSEC, with pointers to wordier sources for those who would want to delve into details, before starting the steps involved in setting up secure name service using ISC BIND 9.9 or newer.

Always taking a practical approach, DNSSEC Mastery covers essentially all aspects of setting up and running a working service, including such topics as key management, configuring and debugging both authoritative and recursive resolvers, various hints for working with or around strengths or deficiencies in various client operating systems, how the new world of DNSSEC influences how you manage your zones and delegations, and did I mention debugging your setup? DNSSEC is a lot less forgiving of errors than your traditional DNS, and Michael includes both some entertaining examples and pointers to several useful resources for testing your work before putting it all into production. And for good measure, the final chapter demonstrates how to distribute data you would not trust to old fashioned DNS: ssh host key fingerprints and SSL certificates.

As I mentioned earlier, this title comes along at what seems to be the perfect time. DNSSEC use is not yet as widespread as it perhaps should be, in part due to incomplete implementations or lack of support in several widely used systems. The free software world is ahead of the pack, and just as the world is getting to realize the importance of a trustworthy Internet name service, this book comes along, aimed perfectly at the group of people who will need an accessible-to-techies book like this one. And it comes at a reasonable price, too. If you're in this book's target group, it's a recommended buy.

The ebook is available in several formats from Tilted Windmill Press, Amazon and other places. A printed version is in the works, but was not available at the time this review was written (May 11, 2013).

Note: Michael W. Lucas gives tutorials, too, like this one at BSDCan in Ottawa, May 15, 2013.

Title: DNSSEC Mastery: Securing The Domain Name System With BIND
Author: Michael W. Lucas
Publisher: Tilted Windmill Press (April 2013)
Michael W. Lucas has another, somewhat chunkier book out this year too, Absolute OpenBSD, 2nd edition, a very good book about my favorite operating system. It would have been reasonable to expect a review here of that title too, except that I served as the book's technical editor, and as such a review would be somewhat biased.

But if you're interested in OpenBSD and haven't got your copy of that book yet, you're in for a real treat. If a firewall or other networking is closer to your heart, you could give my own The Book of PF and the PF tutorial (or here) it grew out of a try. You can even support the OpenBSD project by buying the books from them at the same time you buy your CD set, see the OpenBSD Orders page for more information.

Upcoming talks: I'll be speaking at BSDCan 2013, on The Hail Mary Cloud And The Lessons Learned. There will be no PF tutorial at this year's BSDCan, fortunately my staple tutorial item was crowded out by new initiatives from some truly excellent people. (I will, however, be bringing a few copies of The Book of PF and if things work out in time, some other items you may enjoy.)

### Poul-Henning Kamp: The end of software patents?

May 11, 2013 - 13:02
There are indications that software patents in the USA are being seriously undermined. Their Supreme Court has sent a couple of cases back, not least the much-discussed "in re. Bilski", and the ripples in the lower courts are only just beginning. A brand-new ruling from the appeals court for the "federal circuit", which ...

### Peter Hansteen: The Term Hackathon Has Been Trademarked In Germany. Now Crawl Back Under That Rock, Please.

May 9, 2013 - 19:54
Trademarking somebody else's idea behind their back is both a bad idea and highly immoral. If it wasn't your idea, you don't trademark and you don't patent. It really is that simple, people.

The news that the term hackathon had been trademarked in Germany reached me late last week, via this thread on openbsd-misc. The idea sounded pretty ludicrous to me at the time, but I was too busy with other things that couldn't wait to start reacting properly, and a few distractions later, I'd forgotten about the whole thing.

Then today, via the Twitter stream, came the news that an outfit trading under the name Young Targets (how cute) had now started sending invoices at EUR 2500 a pop to anybody in Germany who dared use the term. One example has been preserved here by Hannover-based doctape, who had hosted an informal developer meetup earlier this year.

It may come as a surprise to a select few, but if there is somebody, somewhere, who is entitled to making money off that fairly well-known term, it is not that group of Germans. The term hackathon has been in use for a decade at least, and it springs like many other good things from the free software movement. The exact origin of the term is not clear, but one of the more prominent contenders for the first original use is the OpenBSD project. As you can see from the project's hackathons page, informal developer gatherings have most likely been called just that since 1999 at least.

And as anyone with an Internet connection and minimal searching skills will find out, hackathons have been quite crucial in keeping the project moving forward and offering tech goodies everybody uses, all for free and under a permissive license anybody can understand.

These items include the Secure Shell client and server used by 97% of the Internet (OpenSSH), the much praised OpenBSD packet filter PF and a whole host of other useful software that's developed as integral parts of the OpenBSD system but tend to find their way into other products such as those offered by Apple, Blackberry and quite a few others, including Linux distributions.

My brief and not too exhaustive search of mailing list archives tonight seems to turn up this message from Theo de Raadt to openbsd-misc dated July 1st, 2001 as the earliest public reference to a hackathon, but reading Theo's message again today I'm pretty convinced that the term was in common use even back then. If anyone can come up with evidence of use earlier than this, I'd love to hear from you, of course (mail to peter at bsdly dot net preferably with the word hackathon somewhere in the subject will be read with interest, or leave a comment below if you prefer).

I'm no lawyer at the best of times, but trademarking a term that both originated elsewhere and has been in general use for more than a decade seems to me at least highly immoral, and if it's not illegal, it should be. Trademarking a free software term and proceeding to charge EUR 2500 a pop for its use? It will be in your best interest to stay out of my physical proximity, Meine Damen und Herren.

Hot on the heels of what must have been a hectic night for the newly targeted young Berliners comes an announcement that states that they kinda, sorta will consider not charging sufficiently non-profity people for the use anyway, in the fluffiest terms I have ever heard come out of a German.

I'll offer our new targets some practical advice: Stop your nonsense right now, and make a real effort to track down the originators of the hackathon concept. It's likely you will find that person is either Theo de Raadt or somebody else closely associated with the OpenBSD project around the last turn of the century. If you cannot unregister the trademark, transfer the rights, free of charge, to the concept's originator.

Then either return any fees collected from your wrongful registration, or, at your victims' option, donate the equivalent sum to OpenBSD or a charity of your individual victims' choice.

Doing the right thing this late in the game and after messing up this thoroughly most likely won't save you from being the target of some sort of mischief from young hotheads (note that I strongly caution against using extra-legal tactics in this matter), but at least you, members and employees of Young Targets, can hope that this embarrassing episode will be forgotten soon enough for you to resume some semblance of careers in a not too distant future. Please go hide under a rock for now, after you've done the right thing as outlined above.

For anyone else interested in the matter, I strongly urge you to go to the OpenBSD project's donations page to donate, grab some CD sets and/or other swag from the orders page, and if you think you can help out with one or more items listed on the hardware wanted page, that will be very welcome for the project too.

It should be noted that I do not serve in any official capacity for the OpenBSD project. The paragraphs above represent my opinion only, and what I have outlined here should not be considered any kind of offer or representation on behalf of the OpenBSD project.
If you're interested in OpenBSD in general, you have a real treat coming up in the form of Michael W. Lucas' Absolute OpenBSD, 2nd edition. If a firewall or other networking is closer to your heart, you could give my own The Book of PF and the PF tutorial (or here) it grew out of a try. You can even support the OpenBSD project by buying the books from them at the same time you buy your CD set, see the OpenBSD Orders page for more information.

Upcoming talks: I'll be speaking at BSDCan 2013, on The Hail Mary Cloud And The Lessons Learned, with a preview planned for the BLUG meeting a couple of weeks before the conference. There will be no PF tutorial at this year's BSDCan, fortunately my staple tutorial item was crowded out by new initiatives from some truly excellent people. (I will, however, be bringing a few copies of The Book of PF and if things work out in time, some other items you may enjoy.)

### Poul-Henning Kamp: Feudalistic Startups

May 6, 2013 - 10:01
Therese has updated her list of "Danish startups" and that is all very nice and so on. It must be assumed to be implied that you have to be doing something hip with IT and cloud to qualify for Therese's list. But the list provokes me, mostly because of the underlying assumptions, which have very little to...

### Martin Pihl: The world's tallest fashion show

May 5, 2013 - 14:57

To mark the 100th anniversary of Herning as a market town, BON’A PARTE has taken part in a project with a number of other companies and institutions. A project that was to result in the world's tallest fashion show.

An Australian artist group used one of our halls to teach a large group of students from the TEKO design school to build 4-meter-tall puppets, which walked the catwalk in Herning on Friday, May 3. The students built 10 puppets representing 10 textile companies from the Herning region, which besides BON’A PARTE also include companies such as JBS, Egetæpper, KABOOKI, and others.

See the regional TV coverage of the event here:

http://www.tvmidtvest.dk/nettv/?id=24517

### Peter Hansteen: Keep smiling, waste spammers' time

May 4, 2013 - 21:28
When you're in the business of building the networks people need and the services they need to run on them, you may also be running a mail service. If you do, you will sooner or later need to deal with spam. This article is about how to waste spammers' time and have a good time while doing it.

Assembling the parts

To take part in the fun and useful things in this article, you need a system with PF, the OpenBSD packet filter. If you're reading this magazine, you are likely to be running all important things on a BSD already, and all the fully open source BSDs by now include PF (as do the commercialized variants sold by Apple and Blackberry), developed by OpenBSD but also ported to the other BSDs. On OpenBSD, it is the packet filter, and if you're running FreeBSD, NetBSD or DragonFlyBSD it's likely to be within easy reach, either as a loadable kernel module or as a kernel compile-time option.

Getting started with PF is surprisingly easy. The official documentation such as the PF FAQ is very comprehensive, but you may be up and running faster if you buy The Book of PF or do what almost 150,000 others have done before you: Download or browse the free forerunner from http://home.nuug.no/~peter/pf. Or do both, if you like.

Network design issues
A PF setup can be, and to my mind should be, quite unobtrusive. For the activities in this article it does not matter much where you run your PF filtering, as long as it is somewhere in the default path of your incoming SMTP traffic. A gateway with PF is usually an excellent choice, but if it suits your needs better, it is quite feasible to do the filtering needed for this article on the same host your SMTP server runs.

Enter spamd
OpenBSD's spamd, the spam deferral daemon (not to be confused with the program with the same name from the SpamAssassin content filtering system), first appeared in OpenBSD 3.3. The original spamd was a tarpitter with a very simple mission in life. Its spamd-setup program would take a list of known bad IP addresses, that is, the IP addresses of machines known to have sent spam recently, and load it into a table. The main spamd program would then have any SMTP traffic from hosts in that table redirected to it, and spamd would answer those connections s-l-o-w-l-y, by default one byte per second.

A minimal PF config
As man spamd will tell you, the bare minimum to get spamd running in a useful mode on systems with the PF syntax introduced in OpenBSD 4.7 or later is

```
table <spamd-white> persist
table <nospamd> persist file "/etc/mail/nospamd"
pass in on egress proto tcp from any to any port smtp \
    rdr-to 127.0.0.1 port spamd
pass in on egress proto tcp from <nospamd> to any port smtp
pass in log on egress proto tcp from <spamd-white> to any port smtp
pass out log on egress proto tcp to any port smtp
```

Or, in the pre-OpenBSD 4.7 syntax still in use on some systems,

```
table <spamd-white> persist
table <nospamd> persist file "/etc/mail/nospamd"
no rdr inet proto tcp from <spamd-white> to any \
    port smtp
rdr pass inet proto tcp from any to any \
    port smtp -> 127.0.0.1 port spamd
```

This means, essentially, that any SMTP traffic from hosts that are not already in the table spamd-white will be redirected to localhost, port spamd, where you have set up the spam deferral daemon spamd to listen for connections. Enabling spamd, on the other hand, is as easy as adding spamd_flags="" to your /etc/rc.conf.local if you run OpenBSD, or to /etc/rc.conf if you run FreeBSD (note that on FreeBSD, spamd is a port, so you need to install that before proceeding; also, on recent FreeBSDs, the rc.conf lines are obspamd_enable="YES" to enable spamd and obspamd_flags="" to set any further flags), and starting it with

```
$ sudo /usr/libexec/spamd
```

or if you are on FreeBSD,

```
$ sudo /usr/local/libexec/spamd
```

It is also worth noting that if you add the "-d" (for debug) flag to your spamd flags, spamd will generate slightly more log information, of the type shown in the log excerpts later in this article.
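Collected in one place, the rc configuration lines mentioned above (a sketch, with the optional -d flag as an example):

```
# OpenBSD: /etc/rc.conf.local
spamd_flags="-d"

# FreeBSD: /etc/rc.conf (spamd installed from ports as obspamd)
obspamd_enable="YES"
obspamd_flags="-d"
```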

While earlier versions of spamd required a slightly different set of redirection rules and ran in blacklists-only mode by default, spamd from OpenBSD 4.1 onwards runs in greylisting mode by default. Let's have a look at what greylisting means and how it differs from other spam detection techniques before we explore the finer points of spamd configuration.

Content versus behavior: Greylisting
When the email spam deluge started during the late 1990s and early 2000s, observers were quick to note that in at least some cases the messages could be fairly easily classified by looking for certain keywords, and the bulk of the rest fit well into familiar patterns.

Various kinds of content filtering have stayed popular and are the mainstays of almost all proprietary and open source antispam products. Over the years the products have developed from fairly crude substring match mechanisms into multi-level rule based systems that incorporate a number of sophisticated statistical methods. Generally the products are extensively customizable and some even claim the ability to learn based on the users' preferences.

Those sophisticated and even beautiful algorithms do have a downside, however: For each new trick a spam producer chooses to implement, the content filtering becomes incrementally more complex and computationally expensive.

In sharp contrast to the content filtering, which is based on message content, greylisting is based on studying spam senders' behavior on the network level. The 2003 paper by Evan Harris noted that the vast majority of spam appeared to be sent by software specifically developed to send spam messages, and those systems typically operated in a 'fire and forget' mode, only trying to deliver each message once.

The delivery software on real mail servers, however, is a proper SMTP implementation, and since the relevant RFCs state that you MUST retry delivery when you encounter some classes of delivery errors, in almost all cases real mail servers will retry 'after a reasonable amount of time'.

Spammers do not retry. So if we set up our system to say essentially

"My admin told me not to talk to strangers"
- we should be getting rid of anything the sending end does not consider important enough to retry delivering.

The practical implementation is to record for each incoming delivery attempt at least:
1. the IP address of the sending host
2. the envelope sender address
3. the envelope recipient address
4. time of first delivery attempt matching 1) through 3)
5. time delivery of retry will be allowed
6. time to live for the current entry

At the first attempt, the delivery is rejected with a temporary error code, typically "451 temporary local problem, try again later", and the data above is recorded. Any subsequent delivery attempts matching fields 1) through 3) that happen before the time specified in field 5) are essentially ignored, treated to the same temporary error. When a delivery matching fields 1) through 3) is attempted after the specified time, the IP address (or in some implementations, the whole subnet) is whitelisted, meaning that any subsequent deliveries from that IP address will be passed on to the mail service.
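As a toy illustration of that decision logic (this is not spamd's actual code; the names and data structure are mine, and the constants match the spamd defaults quoted in the next paragraph):

```c
#include <time.h>

#define PASSTIME (25 * 60)      /* minimum wait before a retry passes */
#define GREYEXP  (4 * 60 * 60)  /* time to live for a grey entry */

/* One record per (IP, sender, recipient) triplet, fields 4) - 6). */
struct grey_entry {
    time_t first_seen;   /* time of first delivery attempt */
    time_t pass_after;   /* earliest time a retry will be allowed */
    time_t expires;      /* time to live for this entry */
};

/* Returns 1 if the delivery should be passed on (and the host
 * whitelisted), 0 if it should get the temporary error. */
static int
greylist_check (struct grey_entry *e, time_t now)
{
    if (e->first_seen == 0) {       /* new triplet: record and defer */
        e->first_seen = now;
        e->pass_after = now + PASSTIME;
        e->expires = now + GREYEXP;
        return 0;
    }
    if (now < e->pass_after)        /* retried too soon: defer again */
        return 0;
    return 1;                       /* proper retry: let it pass */
}
```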

The first release of OpenBSD's spamd to support greylisting was OpenBSD 3.5. spamd's greylisting implementation operates only on individual IP addresses, and by default sets the minimum time before a delivery attempt passes to 25 minutes, the time to live for a greylist entry to 4 hours, while a whitelisted entry stays in the whitelist for 36 days after the delivery of the last message from that IP address. With a properly configured setup, machines that receive mail from your outgoing mail servers will automatically be whitelisted, too.

The great advantage to the greylisting approach is that mail sent from correctly configured mail servers will be let through. New correspondents will experience an initial delay for the first message to get through, before their IP address is added to the whitelist. The initial delay will vary depending on a combination of the length of your minimum time before passing and the sender's retry interval. Regular correspondents will find that once they have cleared the initial delay, their IP addresses are kept in the whitelist as long as email contact is a regular affair.

And the technique is amazingly effective in removing spam. Reductions of 80% to 95% or better in the number of spam messages are frequently cited, but unfortunately only a few reports with actual numbers have been published. An often-cited report is Steve Williams' message on openbsd-misc (available among other places at marc.info), where Steve describes how he helped a proprietary antispam device cope with an unexpected malware attack. He notes quite correctly that the blocked messages were handled without receiving the message body, so their apparently metered bandwidth use was reduced.

Even after more than four years, greylisting remains extremely effective. Implementing greylisting greatly reduces the load on your content filtering systems, but since messages sent by real mail servers will be let through, it will sooner or later also let a small number of unwanted messages through, and unfortunately it does not eliminate the need for content filtering altogether. Unfortunately you will still occasionally encounter some sites that do not play well with greylisting, see the references for tips on how to deal with those.

Do we need blacklists?
With greylisting taking care of most of the spam, is there still a place for blacklists? It's a fair question. The answer depends in a large part on how the blacklists you are considering are constructed and how much you trust the people who generate them and the methods they use.

The theory behind all good blacklists is that once an IP address has been confirmed as a source of spam, it is unlikely that there will be any valid mail sent from that IP address in the foreseeable future.

With a bit of luck, by the time the spam sender gets around to trying to deliver spam to addresses in your domain, the spam sender will already be on the blacklist and will in turn treated to the s-l-o-w SMTP dialogue.

Knowing how a host makes it into a blacklist is important, but a clear policy for checking that the entries are valid and for removing entries is essential too. Once spam senders are detected, it is likely that their owners will do whatever it takes to stop the spam sending. Another reason to champion 'aggressive maintenance' of blacklists is that it is likely that IP addresses are from time to time reassigned, and some ISPs do in fact not guarantee that a certain physical machine will be assigned the same IP address the next time it comes online.

Your spamd.conf file contains a few suggested blacklists. You should consider carefully which ones to use. Take the time you need to look up the web pages listed in the list descriptions in the spamd.conf file and then decide which lists fit your needs. If you decide to use one or more blacklists, edit your spamd.conf to include those and set up a cron job to let spamd-setup load updated blacklists at regular intervals.
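Such a cron job can be as simple as this entry in root's crontab (a sketch, using the OpenBSD path; adjust for FreeBSD):

```
# fetch updated blacklists once an hour
17 * * * * /usr/libexec/spamd-setup
```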

The lists I consider the more interesting ones are the nixspam list, with a 4 day expiry, and the uatraps list, with a 24-hour expiry. The nixspam list is maintained by ix.de, based on their logs of hosts that have verifiably sent spam to their mail servers. The uatraps list is worth looking into too, mainly because it is generated automatically by greytrapping.

Behavior based response: Greytrapping
Greytrapping is yet another useful technique that grew out of hands-on empirical study of spammer behavior, taken from the log data available at ordinary mail servers. You have probably seen spam messages offering lists of "millions of verified email addresses" available. However, verification goes only so far. You can get a reasonable idea of the quality of that verification if you take some time to actually browse mail server logs for failed deliveries to addresses in your domain. In most cases you will find a number of attempts at delivering to addresses that either have never existed or at least have no valid reason to receive mail.

The OpenBSD spamd developers saw this too. They also realized that what addresses are deliverable or not in your own domain is something you have complete control over, and they formulated the following rule to guide a new feature to be added to spamd:
"if we have one or more addresses that we are quite sure will never receive valid email, we can safely assume that any mail sent to those addresses is spam"that feature was dubbed greytrapping, and was introduced in spamd in time for the OpenBSD 3.7 release. The way it works is, if a machine that is already greylisted tries to deliver mail to one of the addresses on the list of known bad email addresses, that machine's IP address is added to a special local blacklist called spamd-greytrap. The address stays in the spamd-greytrap list for 24 hours, and any SMTP traffic from hosts in that blacklist is treated to the tarpit for the same period.

This is the way the uatraps list is generated. Bob Beck put a list of addresses he has referred to as 'ghosts of usenet postings past' on his local greytrap list, and started exporting the IP addresses he collects automatically to a freely available blacklist. As far as I know Bob has never published the list of email addresses in his spamtrap list, but the machines at University of Alberta appear to be targeted by enough spammers to count. At the time this article was written, the uatraps list typically contained roughly 120,000 addresses, and the highest number of addresses I have seen reported by my spamd-setup was just over 180,000 (it peaked later at just over 670,000 addresses). See Figure 1 for a graphical representation of the number of hosts in the uatraps list over the period February 2006 through early March 2008.

Figure 1: Hosts in uatraps
By using a well maintained blacklist such as the uatraps list you are likely to add a few more percentage points to the amount of spam stopped before it reaches your content filtering or your users, and you can enjoy the thought of actively wasting spammers' time.

A typical log excerpt for a blacklisted host trying to deliver spam looks like this:

```
Jan 16 19:55:50 skapet spamd[27153]: 82.174.96.131: connected (3/2), lists: uatraps
Jan 16 19:59:33 skapet spamd[27153]: (BLACK) 82.174.96.131: <bryonRoe@boxerdelasgargolas.com> -> <schurkoxektk@ehtrib.org>
Jan 16 20:01:17 skapet spamd[27153]: 82.174.96.131: From: "bryon Roe" <bryonRoe@boxerdelasgargolas.com>
Jan 16 20:01:17 skapet spamd[27153]: 82.174.96.131: To: schurkoxektk@ehtrib.org
Jan 16 20:01:17 skapet spamd[27153]: 82.174.96.131: Subject: vresdiam
Jan 16 20:02:33 skapet spamd[27153]: 82.174.96.131: disconnected after 403 seconds. lists: uatraps
```

This particular spammer hung around at a rate of 1 byte per second for 403 seconds (six minutes, forty-three seconds), going through the full dialogue all the way up to the DATA part before my spamd rejected the message back to the spammer's queue.

Figure 2: Connection lengths measured at bsdly.net's spamd
That is a fairly typical connection length for a blacklisted host. Statistics from my sites (see Figure 2) show that most connections to spamd last from 0 to 3 seconds, a few hang on for about 10 seconds, and the next peak is at around 400 seconds. Then there is a very limited number that hang around for anywhere from 30 minutes to several hours, but those are too rare to be statistically significant (and damned near impossible to graph sensibly in relation to the rest of the data).

Interaction with a running spamd: spamdb
Your main interface to your spamd's data is the spamdb administration program. The command

$ sudo spamdb

without any parameters will give you a complete listing of all entries in the database, whether WHITE, GREY or other. In addition, the program supports a number of different operations on entries in spamd's data, such as adding or deleting entries or changing their status in various ways. For example,

$ sudo spamdb -a 192.168.110.12

will add the host 192.168.110.12 to your spamd's whitelist, or update its status to WHITE if there was already an entry for that address in the database. Conversely, the command

$ sudo spamdb -d 192.168.110.12

will delete the entry for that IP address from the database. For greytrapping purposes, you can add or delete spamtrap email addresses using the -T flag, with a command such as

$ sudo spamdb -T -a wkitp98zpu.fsf@datadok.no
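Deleting a spamtrap address again uses the same -T flag; with the example address above, that would be

$ sudo spamdb -T -d wkitp98zpu.fsf@datadok.no

The TRAPPED entries currently in the database show up in the plain spamdb listing alongside the GREY and WHITE ones.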

Hitting back, poisoning their well: Summary of my field notes
Up until July 2007, I ran my spamd installations with greylisting, supplemented by hourly updates of the uatraps blacklist and a small local list of greytrap addresses like the one in the previous section (obviously a descendant of a Message-ID, probably harvested from a news spool or from some unfortunate malware victim's mailbox). Then something happened that made me take a more active approach to my greytrapping.

My log summaries showed an unusually high number of attempted deliveries to non-existent addresses in the domains I receive mail for. Looking a little closer at the actual logs revealed spam backscatter: somebody, somewhere had sent a large number of messages with made-up addresses in one of our domains in the From: or Reply-To: headers, and when the To: address wasn't deliverable either, the bounce messages came back to our servers.

The fact that those sites were generating bounces for the spam messages indicates that any copies of those messages directed at actually deliverable addresses in their domains would have been delivered to real users' mailboxes, which is not too admirable in itself.

Another variety that showed up when I browsed the spamd logs was this type:

Jul 13 14:36:50 delilah spamd[29851]: 212.154.213.228: Subject: Considered UNSOLICITED BULK EMAIL, apparently from you
Jul 13 14:36:50 delilah spamd[29851]: 212.154.213.228: From: "Content-filter at srv77.kit.kz" <postmaster@srv77.kit.kz>
Jul 13 14:36:50 delilah spamd[29851]: 212.154.213.228: To: <skulkedq58@datadok.no>

which could only mean that the administrators at that system had not yet learned that spammers no longer use their own From: addresses.

Roughly at that time it struck me:
1. Spammers, one or more groups, are generating numerous fake and undeliverable addresses in our domains.
2. Adding those generated addresses to our local list of spamtraps is mainly a matter of extracting them from our logs.
3. If we could get the spammers to use those addresses in their To: fields as well, it would become even easier to stop incoming spam and shift the spammers to the one-byte-at-a-time tarpit. Putting the trap addresses on a web page we link to from the affected domains' home pages will attract the address-slurping robots sooner or later.
Or the short version: let's poison their well!

(Actually in the first discussions about this with my BLUG user group friends, we referred to this as 'brønnpissing' in Norwegian, which translates as 'urinating in their well'. The more detailed descriptions of the various steps in the process can be tracked via blog entries at http://bsdly.blogspot.com, starting with the entry dated Monday, July 9th, 2007, Hey, spammer! Here's a list for you!.)

Over the following weeks and months I collected addresses from my logs and put them on the web page at http://www.bsdly.net/~peter/traplist.shtml.

After a while, I determined that harvesting the newly generated soon-to-be-spamtrap addresses directly from our greylist data was more efficient and easier to script than searching the mail server logs. Using spamdb, you can extract the current contents of the greylist with

$ sudo spamdb | grep GREY

which produces output in the format

GREY|96.225.75.144|Wireless_Broadband_Router|<aguhjwilgxj@bn.camcom.it>|<bsdly@bsdly.net>|1198745212|1198774012|1198774012|1|0
GREY|206.65.163.8|outbound4.bluetie.com|<>|<leonard159@datadok.no>|1198752854|1198781654|1198781654|3|0
GREY|217.26.49.144|mxin005.mail.hostpoint.ch|<>|<earle@datadok.no>|1198753791|1198782591|1198782591|2|0

where GREY is what you think it is, the IP address is the sending host's address, the third field is what the sender identified itself as in the SMTP dialogue (HELO/EHLO), the fourth is the From: address and the fifth is the To: address. The next three are date values for first contact, when the status will change from GREY to WHITE, and when the entry is set to expire, respectively. The final two fields are the number of times delivery has been blocked from that address and the number of connections passed for the entry.

For our purpose, extracting the made-up To: addresses in our domains from backscatter bounces, it is usually most efficient to search for the "<>" that indicates bounces, then print the fifth field. Or, expressed in grep and awk:

$ sudo spamdb | grep "<>" | awk -F\| '{print $5}' | tr -d '<>' | sort | uniq

This will give you a sorted list of unique intended bounce-to addresses, in a format ready to be fed to a corresponding script for feeding to spamd. The data above and the command line here would produce

earle@datadok.no
leonard159@datadok.no

- in some situations, the list will be a tad longer than in this illustration.

This does not cover the cases where the spammers apparently assume that any mail with a From: address in the local domain will go through, even when it comes from elsewhere. Extracting the fourth column instead,

# spamdb | grep GREY | awk -F\| '{print $4}' | grep mydomain.tld | tr -d '<>' | sort | uniq

will give you a list of From: addresses in your own domain to weed out a few more bad ones from.
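Once you trust the output, the harvesting and trapping can be scripted. What follows is a minimal sketch in sh, not anything from my production setup: the domain pattern is a made-up example, and you will want to review the candidate addresses by hand before letting anything like this feed spamdb unattended.

#!/bin/sh
# Sketch: harvest intended bounce-to addresses in our own domains from
# the greylist and register them as spamtrap addresses.
# Run as root, since spamdb needs write access to the database.
# The domain pattern is a hypothetical example - adjust to taste.
DOMAINS='(datadok\.no|bsdly\.net)'

spamdb | grep '|<>|' | awk -F'|' '{print $5}' | tr -d '<>' | \
    grep -E "@${DOMAINS}\$" | sort -u | \
    while read addr; do
        spamdb -T -a "$addr"
    done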

After a while, I started seeing very visible and measurable effects. At short intervals, we see spam runs targeting the addresses in the published list, working their way down in more or less alphabetical order. For example, in my field notes dated November 25, 2007, I noted

"earlier this month the address capitalgain02@gmail.com started appearing frequently enough that it caught my attention in my greylist dumps and log files.

The earliest contact as far as I can see was at Nov 10 14:30:57, trying to spam wkzp0jq0n6.fsf@datadok.no from 193.252.22.241 (apparently a France Telecom customer). The last attempt seems to have been ten days later, at Nov 20 15:20:31, from the Swedish machine 217.10.96.36.

My logs show me that during that period 6531 attempts had been made to deliver mail from capitalgain02@gmail.com via bsdly.net, from 35 different IP addresses, to 131 different recipients in our domains. Those recipients included three deliverable addresses, mine or aliases I receive mail for. None of those attempts actually succeeded, of course."
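Counts like those are straightforward to reproduce from the spamd logs. Assuming spamd logs via syslog to /var/log/daemon as in a default OpenBSD setup (the log path and the field position are assumptions you may need to adjust for your system), something like

$ grep capitalgain02@gmail.com /var/log/daemon | awk '{print $6}' | tr -d ':' | sort -u | wc -l

counts the distinct IP addresses that tried to deliver mail with that From: address; replacing everything after the grep with a plain wc -l gives the raw number of matching log lines instead.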

It is also worth noting that even the decrepit Pentium III 800MHz (since replaced with a Pentium 4 box, donations of more recent hardware gratefully accepted) at the end of the unexciting DSL line to my house has been able to handle about 190 simultaneous connections from TRAPPED addresses without breaking into a sweat. For some odd reason, the number of simultaneous connections at the other sites I manage, with better bandwidth, has never been as high as at my home gateway.

During the months I've been running the trapping experiment, the number of spamtrap addresses in the published list has grown to more than 10,000 (by May 4th, 2013, the list had grown to 24,431 entries). Oddly enough, my greylist scans still turn up a few more every few days.

Meanwhile, my users report that spam in their mailboxes is essentially non-existent. On the other side of the fence, there are indications that it may have dawned on some of the spammers that generating random addresses in other people's domains might end up poisoning their own well, so they have started introducing patterns that let them weed their own made-up addresses out of their lists. I take that as a confirmation that our harvesting and republishing efforts have been working rather well.

The method they use is to put some recognizable pattern into the addresses they generate. One such pattern is to take the victim domain name, prepend "dw" and append "m" to make up the local part and then append the domain, so starting from sia.com we get dwsiam@sia.com.

There is one other common variation on that theme, where the prepended string is "lin" and the appended string is "met", producing addresses like linhrimet@hri.de. Then again, when they use that new, very recognizable address to try to spam my spamtrap address malseeinvmk@bsdly.net, another set of recognition mechanisms kicks in, and the sending machine is quietly added to my spamd-greytrap. (We've since seen other patterns come and go; scanning the list at http://www.bsdly.net/~peter/traplist.shtml will turn up examples of them all.)
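Spotting new patterns like these is mostly a matter of eyeballing greylist dumps, but a rough filter helps. Here is a sketch; the regular expressions are only my approximation of the two patterns just described:

$ sudo spamdb | grep GREY | awk -F\| '{print $5}' | grep -E '<(dw[a-z]+m|lin[a-z]+met)@'

Anything this turns up is a candidate for the spamtrap list.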

And finally, there are clear indications that spammers use slightly defective relay checkers that tend to conclude that a properly configured spamd is an open relay, temporarily swelling my greylists. We already know that the spammers do not use From: addresses they actually receive mail for, and consequently they will never know that those messages were in fact never delivered.

If you've read this far and you're still having fun, you can find other anecdotes, ones I would have had a hard time believing myself a short while back, in my field notes at . By the time the magazine has been printed and distributed (or by the time you find this revised article online), there might even be another few tall tales there.

You might also want to read

The Book of PF, 2nd Edition, by Peter N. M. Hansteen, No Starch Press November 2010 (covers both pre-4.7 and post-4.7 syntax), available in better bookshops or from the publisher

The Next Step in the Spam Control War: Greylisting, by Evan Harris. Available at http://greylisting.org/articles/whitepaper.shtml

Maintaining A Publicly Available Blacklist - Mechanisms And Principles, April 14, 2013 describes the maintenance regime for the published version of my spamd-greytrap list

In The Name Of Sane Email: Setting Up OpenBSD's spamd(8) With Secondary MXes In Play - A Full Recipe, May 28, 2012, offers another, more OpenBSD-centric, recipe for setting up a spamd based system.

This article originally appeared in BSD Magazine #2, June 2008. This re-publication has suffered only minor updates and edits.

If you're interested in OpenBSD in general, you have a real treat coming up in the form of Michael W. Lucas' Absolute OpenBSD, 2nd edition. If a firewall or other networking is closer to your heart, you could give my own The Book of PF and the PF tutorial (or here) it grew out of a look. You can even support the OpenBSD project by buying the books from them at the same time you buy your CD set; see the OpenBSD Orders page for more information.

Upcoming talks: I'll be speaking at BSDCan 2013 on The Hail Mary Cloud And The Lessons Learned, with a preview planned for the BLUG meeting a couple of weeks before the conference. There will be no PF tutorial at this year's BSDCan; fortunately, my staple tutorial item was crowded out by new initiatives from some truly excellent people.

### Martin Pihl: Magento SPECIALIST wanted

May 2, 2013 - 16:20

We are in FULL swing developing a Magento solution to replace our current webshop, which will become one of Scandinavia's largest Magento-based webshops once we have migrated all countries.

We are therefore looking for a person for a short three-month project position. We cannot promise that it will turn into a full-time position afterwards, but at the very least you will become part of a wildly exciting project.

You really have to know Magento like the back of your hand; you do not need to be a programmer, but you must be able to understand the technical side.

Contact Claus at chy@bonaparte.dk

The deadline is NOW, and you could start TOMORROW!

### Peter Hansteen: You've Installed It. Now What? Packages!

May 2, 2013 - 02:05
Once you've installed your OpenBSD system, packages are there to make your life easier. A "works for me, life is good" guide for your weekend reading.

Installing OpenBSD is easy, and takes you maybe 20 minutes. Most articles and guides you find out there will urge you to take a look at the files in /etc/ and explore the man pages to make the system do what you want. With a modern BSD, the base system is full featured enough that you can in fact get a lot done right away just by editing the relevant files and perhaps starting or restarting one or more services. If all you want to do is set up something like a gateway for your network with basic-to-advanced packet filtering, everything you need is already there in the basic install.

Then again, all the world is not a firewall, and it is likely you will want to use, for example, a web browser other than the venerable lynx or editing tools that are not vi or mg. That's where packages and package systems come in. I'll skip a little ahead of myself and make a confession: The machine I'm writing this piece on reports that it has some 381 packages installed.

Before we move on to the guts of this article, some ceremonial words of advice: If you're new to OpenBSD or it's your first time in a while on a freshly installed system, you could do a lot worse than spending a few minutes reading man afterboot. That man page serves as a handy checklist of things you should at least take a peek at to ensure that your system is in good working order.

Some packages will write important information, such as strings or stanzas to put in your rc.conf.local, rc.local or sysctl.conf files, to your terminal. If you're not totally confident what to do after the package install finishes, it may be a good idea to run your ports and packages installs in a script(1) session. See man script for details.
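For example, a session along these lines (the log file name and the package are arbitrary examples) leaves a complete transcript in pkg-install.log:

$ script pkg-install.log
$ sudo pkg_add nginx
$ exit

script(1) keeps recording everything shown in the subshell until you exit it.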

When dinosaurs roamed the Earth ...
The story of ports and packages goes back to the early days of free software, when we finally found ourselves with complete operating systems that were free, and hackers^H^H^H^H^H^H system administrators found that even with full-featured operating systems such as the BSDs, there were sometimes things you wanted to do that were not already in there.

The way to get that something else was usually to fetch the source code, see if it would compile, make some changes (or a lot) to make it compile, possibly introduce the odd #ifdef block and keep at it until the software would compile, install and run. In the process you most likely found out what, if any, other software (tools or libraries) needed to be installed to complete the process. At that point, you could claim to have ported the software to your platform. If you had been careful and saved a copy of the original source files somewhere, you could use the diff(1) utility to create a patch you could then send to the program maintainer and hope that he or she would then incorporate your changes in the next release.

But then, why wait for the next release? Why not share those diffs with others? How about putting it into a CVS repository that would be available to everyone? That idea was tossed around on relevant mailing lists for a while, and the first version of the ports system appeared in FreeBSD 1.0 in December 1993.

The other BSD systems adopted the basic idea and framework soon after, with small variations. On NetBSD, the term port was already in use for ports of the operating system itself to specific hardware platforms, so on that operating system, the ports tree is referred to as 'package source', or pkgsrc for short. The ports and packages tools are still actively maintained and developed on all BSDs, and most notably Marc Espie rewrote the pkg_* tools for OpenBSD's 3.5 release. Marc and other OpenBSD developers have been refining the package tools with every release since then.

Parallel development has led to some differences in the package handling on the various BSDs, and some of the operations I describe here from an OpenBSD perspective may not be identical on the other operating systems.

Around the same time the BSDs started including a ports tree and packages, people on the Linux side of the fence started developing package systems too. With distributed development taken to the point where the kernel, basic system tools and libraries are maintained separately, perhaps the need there was even greater than on the BSDs.

In fact, some Linux distributions such as the Debian based ones have taken the package management to the point where 'everything is a package' - every component on a running system is a package that is maintained via the package system, including basic system tools, libraries and the operating system kernel. In contrast, the BSDs tend to treat the base system as a whole, with the package management tools intended solely for managing software that does not come as a part of the default install.

The anatomy of ports and packages
The ports system consists of a set of 'recipes' for building third-party software to run on your system. Each port supplies its own Makefile, whatever patches are needed in order to make the software build, and optionally package message files with information that will be displayed when the software has been installed.

So to build and install a piece of software using the ports system, you follow a slightly different procedure than the classical fetch - patch - compile cycle. You will need to install the ports tree, either by unpacking ports.tar.gz from your CD set or by checking out an updated version via cvs. With a populated ports tree in hand, you can go to the port's directory, say

$ cd /usr/ports/misc/screen

to see about installing screen, the popular GNU multi-screen window manager. On a typical OpenBSD system, that directory contains the following files:

$ ls -l
total 20
drwxr-xr-x 2 root wheel 512 Mar 31 16:46 CVS
-rw-r--r-- 1 root wheel 1047 Mar 28 17:34 Makefile
-rw-r--r-- 1 root wheel 283 Apr 5 2007 distinfo
drwxr-xr-x 3 root wheel 512 Jun 26 2012 patches
drwxr-xr-x 3 root wheel 512 Mar 11 2012 pkg

Here, the Makefile is the main player. If you open it now in a text editor or viewer such as less, you will see that the syntax is quite straightforward. What it does is mainly to define a number of variables, such as the package name, where to fetch the necessary source files from, which programs are required for the compile to succeed, and which libraries the resulting program will need to have present in order to run correctly. The file defines a few other variables too, and you can look up the exact meaning of each in the man pages, starting with man ports and man bsd.port.mk.
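To give you a feel for the shape of such a file, here is a heavily trimmed sketch of a port Makefile. The values are illustrative only, modeled on the conventions of the day, and not the actual misc/screen Makefile:

COMMENT =		multi-screen window manager

DISTNAME =		screen-4.0.3
CATEGORIES =		misc

HOMEPAGE =		http://www.gnu.org/software/screen/

MASTER_SITES =		${MASTER_SITE_GNU:=screen/}

# license marker as the ports tree used it at the time
PERMIT_PACKAGE_CDROM =	Yes

WANTLIB =		c curses util

.include <bsd.port.mk>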
With all relevant variables set, at the very end the file uses the line

.include <bsd.port.mk>

to pull in the common infrastructure it shares with all other ports. This is what makes the common targets work, so for example, typing

$ sudo make install

(probably the most common port-related make command for end users and administrators) in the port directory will start the process to install the software.

But before you type that command and press Enter, you may want to consider this: the command will generate a lot of output, most likely more than will fit in the terminal's buffer. If the build fails, it is likely that the message about the first thing that went wrong will have scrolled off the top of your screen and out of the terminal buffer. For that reason, it is good sysadmin practice to create a record of lengthy operations such as building a port by using the script command. Typing script in a shell will give you a subshell where everything displayed on the screen is also saved to a file. Escape sequences, asterisk-style progress bars and 'twirling batons' will end up a bit garbled, but that essential message you are looking for will be there too. man script will give you the details, and unless you're an incurable packrat, do remember to delete the typescript file afterwards.

The install process will start with checking dependencies, go on with downloading the source archive, and check that the fetched file matches the cryptographic signatures stored in the distinfo file. If the signatures match, the source code is extracted to a working directory, the patches from the patches/ directory are applied, and the compilation starts. If the dependency check finds that one or more pieces are missing, you will see the process fetch, configure and install the required packages before continuing with the build process for the original package.

After a while, the package build most likely succeeds and the install completes. At this point you will have a new piece of software installed on your system. You should be able to run the program, and the installed package will turn up in the package listings output by pkg_info, such as

$ pkg_info | grep screen
screen-4.0.3p3 multi-screen window manager

This information is taken from the package's subdirectory in /var/db/pkg, where the information about currently installed packages is stored.
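While we are at it: pkg_info can also tell you exactly which files a package installed, which is handy when you wonder where a sample configuration file ended up. For example,

$ pkg_info -L screen

lists every file the screen package put on the system.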

If you paid close attention during the make install process, you may have noticed that the install step was performed from a binary package. This is one of the distinctive features of the OpenBSD version of the package system. The package build always generates an installable package based on a 'fake' install to a private directory, and software is always installed on the target system from a package. And now we should mention that on a typical modern OpenBSD system, you wouldn't want to install GNU Screen at all. Since the OpenBSD 4.6 release, equivalent (or better!) functionality has been included in the OpenBSD base system via tmux(1).

But you don't need to do that!

This means several things. If you have built and installed a package by typing make install in the relevant ports directory and later run make deinstall or pkg_delete to remove the software, any subsequent install of the software will take place from the package file stored in a subdirectory of /usr/ports/packages.

But more importantly, in most cases you can keep your system's packages up to date without a ports tree on the machine. The main exceptions to the rule that precompiled packages are available from the mirrors are pieces of software with licenses that do not allow redistribution, or that require the end user to do specific things such as go to a web site and click a specific button to formally accept a set of conditions. In those cases it can't be helped, and you will need to go via the ports system to create a package locally and install that.

For each release, a full set of packages is built and made available on the OpenBSD mirrors, and by the time you read this, there is reason to hope that running updates to -stable packages will be available for supported releases too.
The way to make good use of this is to set the PKG_PATH variable to include the packages directory for your release on one or more mirrors close to you and/or a local directory, and then run pkg_add with the -u flag.

My laptop runs -current and I'm based in Europe, so the PKG_PATH is set to
PKG_PATH=ftp://ftp.eu.openbsd.org/pub/OpenBSD/snapshots/packages/`uname -m`/

On a more conservatively run system, you may want to set it to something like
PKG_PATH=ftp://ftp.eu.openbsd.org/pub/OpenBSD/`uname -r`/packages/`uname -m`/

If you want to find out what packages are available at your favorite mirror, you can get a listing of package names by fetching the file $PKG_PATH/index.txt. Another nice resource is openports.se, which offers a nice clickable interface. Once your PKG_PATH is set to something sensible, you can use pkg_add and the package base name to install packages, so a simple

$ sudo pkg_add screen

would achieve the same thing as the 'make install' command earlier (minus the lengthy compilations, and still assuming that you would want to install the package instead of getting to know tmux(1), which is included in the base system), and most likely a lot faster too.
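If you just want to skim what is available before installing anything, the index.txt mentioned above can be fetched with the base system's ftp client. A sketch, assuming your PKG_PATH holds a single URL (with several colon-separated entries, pick one):

$ ftp -o - "$PKG_PATH/index.txt" | less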

Once you have a set of packages installed, and keeping in mind that you need a meaningful PKG_PATH, you can keep them up to date using pkg_add -u. If you want more detailed information about the package update process and want pkg_add to switch to interactive mode when necessary, you can use something like this command:
$ sudo pkg_add -vui

I have at times tended to run my pkg_add -u with some of the -F flags in order to force resolution of certain types of conflict, but given the quality of the work that goes into the packages, most of the -F options are rarely needed.

pkg_add and its siblings in the pkg_* tools collection have a number of options we have not covered here, all intended to make your package management on OpenBSD as comfortable and flexible as possible. The tools come with readable man pages, and may very well be the topic of future articles. You should also be aware that Michael W. Lucas's Absolute OpenBSD, 2nd Edition is about to be released (already available as an ebook), with a more in-depth treatment of the package system than what I've presented here. Look at the end of the article for further links.

How do I make a package then?
That is a large question, and the first question you should ask if you think you want to port a particular piece of software is, "Has this already been ported?". There are several ways to check. If you are thinking of creating a port, you most likely already have the ports tree installed, so using the ports tree's built-in search facility is the obvious first step. Simply go to the /usr/ports directory and run the command

$ make search key=mykeyword

where mykeyword is a program name or keyword related to the software you are looking for. Another option, with even more flexible search possibilities, is to install databases/sqlports. And of course, searching the ports mailing list archives (http://marc.info/?l=openbsd-ports) or asking on the mailing list works too.

When you have determined that the software you want to port is not already available as a package, you can go on to prepare for the porting effort. Porting and package making is the subject of much Usenet folklore and rumor, but in addition you have several man pages with specific information on how to proceed. These are ports(7), package(5), packages(7), packages-specs(7), library-specs(7) and bsd.port.mk(5).

Read those and use your familiarity with the code you are about to port to find your way. The OpenBSD web site offers quite a bit of information too. You could start with re-reading the main ports and packages page at http://www.openbsd.org/faq/faq15.html, and follow up with the pages about the porting process at http://www.openbsd.org/porting.html, testing the port at http://www.openbsd.org/porttest.html and finally the checklist for a sound port at http://www.openbsd.org/checklist.html.

All the while, try first to figure out the solution to any problems that pop up, read the supplied documentation, and only then ask port maintainers via the ports mailing list for help. Port maintainers are generally quite busy, but if you show signs of having done your homework first, there is no better resource available for helping you succeed in your porting or port maintenance efforts.

One fine resource for the aspiring porter is Bernd Ahlers' ports tutorial from OpenCon 2007 (hm, doesn't that need a refresh?); you can look up Bernd's slides at http://www.openbsd.org/papers/opencon07-portstutorial/index.html, and it is possible he can be persuaded to repeat the tutorial at a conference near you. And for some recent advances in the OpenBSD ports and packages system, see Marc Espie's EuroBSDCon 2012 presentation Advances in packages and ports in OpenBSD.

The main source of information about the OpenBSD ports and packages system is to be found on the OpenBSD project's web site. The FAQ's ports and packages section at http://www.openbsd.org/faq/faq15.html has more information about all the issues covered in this article, and goes into somewhat more detail than space allows here. If you encounter problems while installing or managing your packages, it is more than likely that you will find a solution or a good explanation there. And of course, if nothing else works or you can't figure it out, there is always the option of asking the good people at misc@openbsd.org or ports@openbsd.org (do read the OpenBSD Mailing Lists page before just butting in) or search the corresponding mailing list archives.

An earlier version of this article appeared in BSD Magazine 2/2008. You can now also find this updated version featured at OpenBSD Journal (aka undeadly.org), the primary OpenBSD news site.

If you're interested in OpenBSD in general, you have a real treat coming up in the form of Michael W. Lucas' Absolute OpenBSD, 2nd edition. If a firewall or other networking is closer to your heart, you could give my own The Book of PF and the PF tutorial (or here) it grew out of a look. You can even support the OpenBSD project by buying the books from them at the same time you buy your CD set; see the OpenBSD Orders page for more information.

Upcoming talks: I'll be speaking at BSDCan 2013 on The Hail Mary Cloud And The Lessons Learned, with a preview planned for the BLUG meeting a couple of weeks before the conference. There will be no PF tutorial at this year's BSDCan; fortunately, my staple tutorial item was crowded out by new initiatives from some truly excellent people.