This site is dedicated to the things I come across that I find interesting enough to publish; almost all of them are the work of other, much smarter people.

Archive for November, 2010

Benefits of using CSS

A demonstration of what can be accomplished visually through CSS-based design. Select any style sheet from the list to load it into this page.

This is a learning exercise as well as a demonstration. You retain full copyright on your graphics (with limited exceptions, see submission guidelines), but we ask you release your CSS under a Creative Commons license identical to the one on this site so that others may learn from your work.
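The idea is easy to see in miniature: the markup never changes; only the style sheet loaded into the page does. A hypothetical sketch (the selector and values are invented, not taken from the Zen Garden's actual markup):

    /* sheet one: calm and classic */
    #pageHeader h1 { font: 2em Georgia, serif; color: #333; }

    /* sheet two: load this instead and the very same heading transforms */
    #pageHeader h1 { font: bold 3em Verdana, sans-serif; color: #c00; text-transform: uppercase; }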

http://www.csszengarden.com/


Primary is a simple…

Primary is a simple CSS Framework, designed for Developers and Designers in order to make using CSS as easy as possible.

Primary is an experiment based on the concepts of legendary comic book artist Wally Wood’s “22 Panels That Always Work”.

http://www.primarycss.com/

Twitter:
@PrimaryCSS
Email:
Trench@PrimaryCSS.com


CSS Chapter 2

As we explained in the previous chapter, HTML elements enable Web page designers to mark up a document as to its structure. The HTML specification lists guidelines on how browsers should display these elements. For example, you can be reasonably sure that the contents of a strong element will be displayed bold-faced. Also, you can pretty much trust that most browsers will display the content of an h1 element using a big font size… at least bigger than the p element and bigger than the h2 element. But beyond trust and hope, you don’t have any control over how your text appears.

CSS changes that. CSS puts the designer in the driver’s seat. We devote much of the rest of this book to explaining what you can do with CSS. In this chapter, we begin by introducing you to the basics of how to write style sheets and how CSS and HTML work together to describe both the structure and appearance of your document.
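As a first taste of that control, a style sheet is just a list of rules; the values below are invented for illustration:

    h1     { font-size: 24pt; color: navy; }  /* no more hoping h1 comes out big */
    h2     { font-size: 18pt; }               /* guaranteed smaller than h1 */
    strong { font-weight: bold; }             /* bold for certain, not just probably */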

This is chapter 2 of the book Cascading Style Sheets: Designing for the Web, by Håkon Wium Lie and Bert Bos (2nd edition, 1999, Addison Wesley, ISBN 0-201-59625-3).

Tip:

To start using CSS, you don’t even have to write style sheets. Chapter 16 will tell you how to point to existing style sheets on the Web.
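One way to do that is CSS’s own @import rule; the URL below is a placeholder, not a real style sheet:

    /* reuse a published style sheet, then override one detail locally */
    @import url("http://example.com/styles/basic.css");
    h1 { color: olive; }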


Introduction – Free CSS tutorial

CSS is a style language that defines the presentation of HTML documents.

Cascading Style Sheets (CSS) are a fantastic tool for adding presentation to websites. They can save you a great deal of time and let you design websites in a completely new way. CSS is a must for anyone working in web design.
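If you have never seen any, CSS is just plain-text rules like these (a generic sketch, not an example from the tutorial itself):

    /* the presentation lives here; the structure stays in the HTML */
    body { font-family: Arial, sans-serif; background-color: #eeeeee; }
    h1   { color: #006666; }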

This tutorial will get you started with CSS in just a few hours. It is easy to understand and will teach you all sorts of sophisticated techniques.

Learning CSS is fun. As you work through the tutorial, remember to set aside enough time to experiment with what you learn in each lesson.

Using CSS requires basic experience with HTML. If you are not familiar with HTML, please start with our HTML tutorial before diving into CSS.

What software do I need?

Please avoid using software such as FrontPage, DreamWeaver or Word with this tutorial. Sophisticated software of that kind will not help you learn CSS; rather, it will limit you and hold back your learning considerably.

All you need is a simple, free text editor.

For example, Microsoft Windows comes with a program called Notepad, usually found in the Start menu under All Programs, in the Accessories folder. If that one is not available, you can use a similar text editor, for example Pico on Linux or SimpleText on Macintosh.

Simple text editors like these are ideal for learning HTML and CSS because they do not affect or change the code you type. That way, both your successes and your mistakes can only be attributed to you… and not to the software.

You can use any browser with this tutorial. We do encourage you to keep your browser up to date and to use the latest version available.

A browser and a simple text editor are all you need.

Let's get started!

http://es.html.net/


The history of a model for fonts on the Web

W3C

Bert Bos | History of WebFonts


Bert Bos (W3C)
SXSW conference
Austin, TX, USA
March 13, 2010

1994–1996: before WebFonts

< Oct 1994:
User style sheets

Oct 1994:
Author-reader balance
→ font problem

font download?
font synthesis?
most similar?

CSS level 1 (1996):
Font set
Generic families
(serif, sans-serif…)

In 1994, I was working on a browser (optimized to integrate the kind of information sources that scholars in the humanities use). It handled a subset of SGML, including HTML, and it had style sheets. The style sheets had many features that are still in CSS, and, of course, it specified fonts. But there was no particular difficulty with that, because the style sheets were strictly for use by the reader. A reader presumably wouldn’t specify a font he didn’t actually have.

Then in October 1994, Håkon Lie posted his proposal for style sheets. There were two or three proposals besides ours, but his caught my eye for one feature in particular: it postulated a (yet to be specified) algorithm to balance the author’s desired style against the reader’s.

We combined our ideas and the result was Cascading Style Sheets. In syntax it doesn’t resemble either of our two style sheet languages, but most of the features are still there. But now we had a problem with fonts…

If you negotiate, say, the author’s request for red text and the reader’s preference for blue text, you may end up with red, blue, or even something in between, but it is still a color and technically you can draw it. (The aesthetics are another matter…)

But if you combine the reader’s choice of font A with the author’s preference for B, and the algorithm yields B, it may well be that font B isn’t actually available. We considered several solutions, but most of them could be dismissed immediately.

We thought about downloading (embedding) the font, but (1) fonts are big and even on the academic networks of the time you wouldn’t want to wait for a font to download; and (2) there was no common font format.

We thought about font synthesis as well: pass something like a PANOSE number and create a font with those characteristics on the fly. But such a font is likely to be so ugly that both author and reader would prefer some other font instead.

We thought about finding the most similar font, again based on something like PANOSE numbers, but that would probably also lead to a font that neither author nor reader would have chosen.

And in any case, we couldn’t make the style sheet language too complicated, because it had yet to be adopted…

And so we settled on a solution in two parts:

  1. The concept of a font set, i.e., the author can specify a list of fonts in order of preference in the hope that one of them at least is available on the reader’s machine.
  2. Five generic font family names (serif, sans-serif, monospace…) so that an author could specify that, if none of his fonts was available, he at least wanted a serif font.
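In CSS level 1 terms, the two parts combine into a single declaration like this one (the font names are merely an example):

    /* try Garamond first, then Times New Roman; otherwise any serif will do */
    body { font-family: Garamond, "Times New Roman", serif; }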

That became a standard (W3C Recommendation) as CSS level 1, in 1996.

1997–1998: WebFonts!

April 1996
First Fonts WG

synthesis
PANOSE
font metrics

download & (virtual) embedding
any format
type-1, truetype, eot…

CSS level 2 (1998)
@font-face

While CSS level 1 was going through its final reviews, in 1996, we started working on CSS level 2. CSS was well received and we thought we could add a number of features that we wanted, but that had been too advanced for level 1, including font download.

We created a working group, the first incarnation of the Fonts Working Group, and asked Adobe to explain what they had done for PDF. With their help, and that of Bitstream, Microsoft and others, we created WebFonts, also known as the ‘@font-face’ rule of CSS.

The group’s results were integrated into CSS level 2 and allowed not only font download, but also synthesis of a font with the same metrics (primarily meant to bridge the time while waiting for the font to download).
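In CSS level 2 the rule looked roughly like this (family name and URL invented for illustration):

    @font-face {
      font-family: "Example Sans";
      src: url("http://example.com/fonts/example-sans.eot");
    }
    h1 { font-family: "Example Sans", sans-serif; }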

There still wasn’t a clear common font format for all platforms. So we didn’t recommend a single format, but provided an open-ended list that included, among others, Type-1, OpenType and EOT.

CSS level 2 became a standard in 1998, and then we waited to see what would happen.

Microsoft implemented the download feature straightaway, using their own EOT format, which made sense, because EOT seemed to have tackled the copyright problem. (Of the handful of free fonts that existed at the time, very few were any good.)

In the context of the browser wars that raged at the time, Netscape decided to implement something completely different. They used a technology from Bitstream called PFR (Portable Font Resource) to try and offer similar capabilities, but without using the WebFonts framework. The result wasn’t very good and distributing commercial fonts as PFR was probably illegal in many countries, too. Now, in 2010, PFR as such still exists, but it is no longer supported on the Web.

And for a long time, nothing else happened…

2006–2008: need for a format

2006: Image Replacement Techn.
CSS WG decides to act

Simplify WebFonts?
Enhance ‘content’ prop.?
Standardize font format?

Formats:
SVG?
OpenType?
OpenType in zip file?

2008: Microsoft & Monotype submit EOT
Turns out to be easy to implement…

EOT, the only format available with WebFonts in practice, was in use in some parts of the world, especially for languages for which few fonts were available, but it couldn’t be called a big success. Designers rarely used it, and no browser other than Microsoft’s Internet Explorer had implemented it.

Then, at one of its meetings in 2006, the CSS working group discussed a phenomenon that was clearly on the rise on the Web: image replacement techniques.

When CSS was started in 1994, the reason wasn’t just to make Web documents more beautiful, it was also to provide an alternative for practices that went against our goal of a semantic Web: <FONT> tags, images instead of text, spacer elements, etc. We wanted HTML documents to be accessible, device-independent, easy to maintain, and re-usable, and so we tried to make the separation of text/structure from style as easy and attractive as possible.

The image replacement techniques were not quite as bad as the original practice of putting an image in an HTML document. In their case the text was still in the document and the images only in the style sheet, but they still led to accessibility problems and documents that were hard to maintain. E.g., you cannot cut and paste text when the text has been replaced by an image.

We, the CSS working group, decided that the designers were clearly demanding better font support. We could improve some small things in CSS here and there, but the main job seemed to be to get WebFonts (the ‘@font-face’ rule) adopted. We decided to enhance the ‘content’ property and simplify WebFonts by removing the font synthesis part, and we discussed formats: which formats would lead to the most supply?

W3C now had a graphics format, SVG, that included a way to define fonts. We expected (and it seems we were right) that SVG would eventually come to be integrated with Web documents. But SVG fonts were not a sufficient answer. They lack, for example, the advanced features that you find in OpenType. And they don’t solve the problem of commercial fonts, which you cannot just put on the Web for everybody to copy.

Because, although by now the number of free fonts had increased, it seemed designers wanted to use commercial fonts, such as the ones that come with Adobe Illustrator.

EOT promised to solve that problem. It had apparently both the full power of OpenType and a solution for distributing a font without losing the information about its license. Independently of each other, both Håkon Lie and I challenged Microsoft to open up the format and show how it could solve our WebFonts problem.

They did. The person who made it all happen was Paul Nelson, one of Microsoft’s representatives in the CSS WG at the time. He got the right people to sign off on the submission of EOT to W3C, and he found and cleaned up the EOT documentation. But from the speed with which he got the approval, I suspect Microsoft had already started thinking about opening EOT before we asked.

Paul even managed to convince Monotype to join W3C and submit their patented compression algorithm MicroType Express, which is used in EOT. Submitting a technology to W3C implies that it becomes Royalty Free if it becomes a W3C standard.

There is some administrative work involved in preparing a document in the right format for publication as a W3C submission, but in March 2008 EOT and MicroType Express were published as a joint submission by Microsoft and Monotype.

Based on what EOT was supposed to do, I had imagined what it would look like on the inside. But I was afraid that it wouldn’t be like that at all, because experience shows that document formats that come out of Microsoft aren’t always as elegantly designed as one might wish. But it turned out to be everything I had hoped for.

EOT is simple and straightforward. It has one or two things that are redundant, but they are not harmful and are easily removed. It has a tiny bias towards Windows, but again nothing that cannot be fixed easily.

To prove to myself that it is indeed as easy to write software for as I thought, I recently did just that. It took me just one day to write a program that reads EOT files and another for one that writes them. On a Friday night I wrote eotinfo, which decodes an EOT header and displays its contents in a readable way; and on the Saturday I wrote mkeot, which creates an EOT file. So far they work perfectly.

So, in March 2008 I thought we were nearly done. EOT didn’t need much work to be made into a proper W3C Recommendation. We just needed to check that the font industry was indeed willing to use EOT, as Microsoft believed. There were indications that that was indeed the case.

2008: talks w/ font industry

What do designers want?

What does the font industry want?
W3C talks to many people

A few want DRM…
…but as long as EOT becomes a standard they will accept that, too

End of 2008: EOT is looking good!

During the summer and autumn of 2008, I talked to many people in the font industry: font makers, font vendors, makers of font software, and designers. I met big companies (Monotype/Linotype, Adobe) but also small, even tiny, companies. And some of my colleagues at W3C did the same.

The conclusion was that there was a range of preferences: from people who wanted EOT to people who would rather have something with DRM, but who were at the same time realistic enough to see that DRM is not in the current zeitgeist and were willing to accept EOT, once it was a standard.

What made EOT acceptable, even though it does nothing to make copying difficult, is that it at least makes doing the right thing easy. Anybody who wants to redistribute a font that comes as an EOT can easily see what the license is. The core of the license is even machine-readable, so you don’t have to know English.

In other words, by the end of 2008, things looked good. I had already done the arithmetic: six months to rewrite the EOT specification, fix the ambiguities, and add the conformance requirements; six more months to create a test suite, make the first implementations and test them; and by the end of 2009 we would have a standard for font embedding on the Web, and our old WebFonts would finally come into its own.

I was wrong.

2009: talks w/ browsers

Opposition from browser makers

Compression costly
Usability of EOT vs raw OpenType…

DMCA in the US
Can you exclude that somebody considers EOT to be DRM?

I hadn’t thought the other browser makers, in particular Opera, Mozilla and Apple, could be opposed to EOT. They were already showing interest in WebFonts and EOT looked easy enough to implement. We could talk about dropping some optional parts, such as the compression, but those were details, weren’t they?

In reality, when W3C proposed to resurrect the Fonts Working Group in order to standardize the next version of EOT, the three browser makers protested violently.

There were arguments about the usefulness of the special compression. Was it worth adding new code to the browser for a compression that was better, but not an order of magnitude better than the compression that the Web already used, viz., gzip?

There were arguments about usability: even though authors are used to adapting all resources for the Web (from Word to HTML, from TIFF to PNG, from 10 megapixels to less than 1, etc.), being able to use TrueType and OpenType files directly, instead of EOT, would avoid one step.

But the argument that wouldn’t go away after discussions was the DMCA. The DMCA is a law in the US that is meant to better protect copyrighted works and one of the things it does is to make it a crime to circumvent an effective technical measure that is designed to make copying difficult. In other words, circumventing DRM.

The scenario is as follows: imagine somebody implements EOT according to the standard. That means the software looks in the EOT header for one or more URLs and does a string comparison of those URLs against the URL of the document it is about to render. If one of the URLs matches, the font can be used; otherwise not. Parsing the EOT header is a few dozen lines of code, and the comparison itself is two lines.

Now imagine somebody else takes that software and removes the two lines that compare URLs. The result is a program that applies fonts in all cases, against the standard and, in some cases, against the license of the font.

If you can find a judge who is willing to claim that those two lines constitute an effective DRM and that removing them is tantamount to circumventing it, then you can claim that the author of the original software is an accomplice, because he provided most of the code.

It’s an unlikely scenario, but it is apparently enough to stop any attempt to protect copyright as long as the DMCA exists. Software makers simply do not want copyright information that can be read by software, because when it cannot be read, you cannot be liable for not reading it.

2010: WOFF

Need a format that:

contains all of OpenType
(even future versions)
but is not OpenType

no license info
requires little code
not much less efficient than EOT

Web Open Font Format
like compressed OT with new header

There were ideas for an “EOT light,” a version of EOT without license information, the idea being that its being a special Web font, and not normal OpenType, would at least tell people that the font wasn’t meant to be copied and installed locally. But using EOT for that was also confusing, given that existing software already offered more features.

So we settled on a new format, called WOFF, that was proposed by Mozilla. Just like EOT, it contains a full OpenType file and thus keeps all the advanced typographic features, but the OpenType file is embedded in the WOFF file differently than in an EOT file, and the file is always compressed with gzip.
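For authors, a WOFF file plugs into the same ‘@font-face’ rule as any other format (family name and file path hypothetical):

    @font-face {
      font-family: "Example Serif";
      src: url("fonts/example-serif.woff") format("woff");
    }
    body { font-family: "Example Serif", serif; }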

W3C now proposes that the new Fonts Working Group make a standard from WOFF. At this moment, March 13, 2010, the official review period ended just a few hours ago, but the results aren’t known yet. They are expected around April 1.

I’m still optimistic and so I already invite all of you to join the effort, starting in April. If you’re a company, you can join W3C and become a member of the working group. We’ll need people to help with editing and testing. You can also join the public mailing list <www-font@w3.org> and help us develop the specification and promote the result. Joining the public list is open to everybody and implies no special commitments.

2011-…: WebFonts for real?

A new Fonts WG is under review

We should know more by April

In any case:
join us on
<www-font@w3.org>!


Cascading Style Sheets

Partial CSS software list

Nearly all browsers nowadays support CSS and many other applications do, too. To write CSS, you don’t need more than a text editor, but there are many tools available that make it even easier.

Of course, nearly all software has bugs. And some programs are further ahead implementing the latest CSS modules than others. Various sites describe bugs and work-arounds.


The best touchscreens on the market


A comparison has been made of the touchscreens of today's best phones: the HTC Desire HD, LG Optimus E900, Sony Ericsson X10, Nokia N8, iPhone 4, Nokia C6-01, Samsung Wave and Samsung Galaxy S. Essentially, this means comparing LCD vs AMOLED vs IPS LCD vs Super Clear LCD vs CBD AMOLED vs Super AMOLED, the best we have available today. Let's look at the chart in detail so we can compare:

 

[Chart: colour/contrast comparison of the displays, via GSMArena]

In the chart we can clearly see the colour/contrast differences between all the main screens. The Nokia C6-01 seems to have a slight edge over the rest, but there is genuinely something good in all of them, including a wide gap over the much-hyped Nokia N8, so CBD technology seems to be winning the arm-wrestle against AMOLED and Super AMOLED; and we already know plenty about the qualities of the iPhone 4's screen.

If we are still not convinced, here is a chart that explains it in more detail so we can see the main differences:

[Table: each model's screen type and test results]

We can see each model and what type of screen it has and, according to the tests that have been run, how well each one responds under different conditions: viewed from an angle, viewed head-on, with and without light, plus the average colour performance.

Once again, all the data so that everyone can choose the phone that seems ideal to them.

Via mynokiablog


Manifesto for a Neutral Network


The citizens and companies that use the Internet and subscribe to this text declare:

1. That the Internet is a Neutral Network by design, from its inception through its current implementation, in which information flows freely, without discrimination based on origin, destination, protocol or content.
2. That companies, entrepreneurs and Internet users have been able to create services and products on that Neutral Network without the need for authorizations or prior agreements, so that the barrier to entry has been virtually non-existent, which has allowed the explosion of creativity, innovation and services that defines the current state of the network.
3. That all users, entrepreneurs and Internet companies have been able to define and offer their services on equal terms, taking the concept of free competition to a level previously unknown.
4. That the Internet is the vehicle for free expression, freedom of information and social development that matters most to citizens and companies. Its nature must not be put at risk in any way.
5. That for the network to remain neutral, operators must carry data packets neutrally, without acting as a customs office for traffic and without favouring or hindering some content over others.
6. That traffic management in specific and exceptional situations of network overload must be carried out transparently, according to standard criteria that serve the public interest and are neither discriminatory nor commercial.
7. That deliberate restriction of traffic by operators cannot become a sustainable alternative to investment in the network.
8. That this Neutral Network is threatened by operators interested in reaching commercial agreements under which content would be favoured or degraded depending on its relationship with the operator.
9. That some market players want to “redefine” the Neutral Network so that it is managed according to their interests, and that this pretension must be prevented; the definition of the fundamental rules of how the Internet operates must be based on the interests of those who use it, not of those who supply it.
10. That the response to this threat to the network cannot be inaction: doing nothing amounts to allowing private interests to carry out, de facto, practices that affect citizens' fundamental freedoms and the ability of companies to compete on equal terms.
11. That it is necessary and urgent to call on the Government to protect the Neutral Network clearly and unambiguously, in order to safeguard the value of the Internet for the development of a more productive, modern and efficient economy, free of meddling and interference. To that end, any motion that is passed must inextricably tie the definition of the Neutral Network into the content of the future law that promotes it, and must not make its application conditional on matters that have little to do with it.

The Neutral Network is a concept clearly defined in academic circles, where it raises no real debate: citizens and companies have the right for the data traffic they receive or generate not to be manipulated, distorted, impeded, diverted, prioritized or delayed according to the type of content, the protocol or application used, the origin or destination of the communication, or any other consideration unrelated to their own will. That traffic must be treated as private communication and may only be monitored, traced, stored or analyzed in its content under a court order, like the private correspondence it really is.

Spain and Europe as a whole are in the midst of an economic crisis so serious that it is forcing a change of production model and a better use of the creativity of their citizens. The Neutral Network is crucial to preserving an ecosystem that encourages competition and innovation for the creation of the countless products and services that remain to be invented and discovered. The ability of the network to work collaboratively, and the markets it connects, will affect every sector and every company in our country, making the Internet a key factor in our current and future economic and social development, and determining to a great extent our level of competitiveness. Hence our deep concern for the preservation of the Neutral Network, and hence we urge the Spanish Government to be proactive in the European context and to legislate clearly and unambiguously in this regard.

If you feel represented by this manifesto, we strongly urge you to copy it and post it on your blog, or to mention it on Twitter or Facebook using the hashtag #redneutral. Thank you!

Published on Bitelia, November 30, 2010, by earcos.


Why do astronauts sometimes lose their fingernails?

Going out into space carries risks, and it also comes with a series of side effects we all know fairly well: loss of muscle mass, and so on. There is, however, one side effect that gets very little attention: losing fingernails.

Why does this happen? The reason could hardly be more mundane, and it has recently been aired thanks to a study by Dava Newman, a professor of aeronautics at MIT.

The blame lies entirely with the regulation gloves used on spacewalks. To simulate the pressure found on Earth, these gloves have a rigid texture, a sort of set of thimbles inside. Mere rubbing against them tends to tear the nails, and in many cases they come off completely.

According to data from 2002 to 2004, 47% of the more than 350 injuries recorded among astronauts were hand injuries. More than half of those involved damage to the nails.

Newman is currently working on designing gloves that combine protection and flexibility. In her view, the solution will involve a change of concept: replacing today's air-pressurized suits with skin-tight ones, or using robotic gloves that help astronauts use their hands without leaving their fingernails behind in the attempt.

Via | madrimasd


The day the Nazis came to Canada

[Photo: Weather Station Kurt]

On the east coast, on the opposite coast, and all across the United States, many old stories are told about infiltrated Nazi agents, spies who entered the country from submarines. On the west coast, as you would expect, the stories are mostly about a Japanese invasion. Some of these tales are rooted in obscure real episodes that took place during the Second World War. The same is true in Canada and, as in the case at hand, some events may well have served as seeds for contemporary legends about invaders from overseas, although this particular adventure apparently lay forgotten for decades.

The story begins on September 18, 1943, in the middle of the war, with the German submarine wolf packs patrolling the whole Atlantic hunting Allied transports. That day, the German submarine U-537 left Kiel for Bergen, in Norway. On September 30 it headed out into the Atlantic, destination unknown. Its captain, Peter Schrewe, of course knew exactly where the boat was headed: Canada or, more precisely, a remote point on the coast of the Labrador Peninsula.

What business did a German submarine have on Canada's Arctic coast? That is still something of a mystery, because even the main mission has its enigma. The orders were to cross the ocean to a distant point where a scientist, Dr. Kurt Sommermeyer, and his research team would spend some time installing a weather station, Wetter-Funkgerät (WFL) number 26, a sophisticated automatic weather-monitoring system built by Siemens, variants of which were installed at various places in the cold waters of the Arctic. And that was all: the device was installed and activated, and the submarine slipped away from hostile shores before it could attract attention; then again, the place was so inhospitable that being seen by anyone was almost impossible.

The entire operation on land was carried out with admirable precision, in barely two days, from the arrival of U-537 on the Canadian coast on October 22. On December 8, 1943 the submarine entered the port of Lorient, in occupied France, its mission accomplished. Unfortunately for the crew of U-537, barely a year after its meteorological mission the submarine was lost in action. Dr. Sommermeyer, however, survived the war, which is why his adventure can be told today.

Before continuing with the story, it is worth pointing out what was special about Wetter-Funkgerät 26. It was a true marvel. Today, when there is so much talk of wireless sensor networks and remote environmental monitoring systems, it is worth remembering that devices like that secret station were truly ahead of their time. Station 26 consisted of ten weatherproof containers. One held the data-recording instruments, another served as the base for a ten-metre radio antenna, and the rest held some of the first nickel-cadmium batteries ever built, designed to power the station for months. Add to all this several buoys for taking measurements in surface waters and other masts for an anemometer and similar instruments, and you have a strange station that automatically took readings of temperature, wind speed and direction, pressure and humidity. The data were broadcast in compressed "packets" every three hours.

What should have been at least six months of operation lasted no more than a few days. The Germans were able to pick up the distant station's signals only briefly before unexplained interference appeared that prevented them from continuing to record the readings. Moreover, the war was turning, and Germany could no longer afford to send another submarine to Canada to find out what had happened to its advanced station. Everything fell into oblivion: the device, the story of U-537, and even the mere existence of the operation and its ultimate objectives.

Nothing was written about Wetter-Funkgerät 26 for decades; nobody knew the equipment was still out there, lost in the Arctic, except Dr. Sommermeyer and his collaborators, of course. And so the whole story might have remained, had not a recently retired Siemens engineer, Franz Selinger, decided in the late 1970s to write a book about the weather stations the company had designed and built over its lifetime. What the engineer did not know was that, deep in the Siemens archives, Sommermeyer's plans, photographs and reports lay dormant. There, right under his nose, was something that did not match the data for any other station built by the company: a truly exceptional device whose final destination nobody knew. After many years of reviewing other archives, and thanks to the help of several historians and of Dr. Sommermeyer's son, the submarine in the photographs was finally identified: it was U-537. Fragments of the boat's logbook, held in the Freiburg archives, left no room for doubt: the station had been installed in Canada. To everyone's surprise, when in 1980 the Canadian coast guard was alerted to scan the bare coasts of Labrador at the spot marked on the map, the station was found. There it was, lost and forgotten, practically intact, its batteries and instruments long dead. For almost four decades nobody had noticed its presence. Today the remains of Wetter-Funkgerät 26 can be seen in much friendlier surroundings, in a room of the Canadian War Museum in Ottawa. (The photograph accompanying this article shows how the station is currently displayed in that museum.)

More information:
BLDGBLOG – The Annals of Weather Warfare.
BNET – Germany in the Arctic: the little known story of Labrador’s WWII weather stations.
Nazi Kurt captured in Arctic Circle in 1981.
Weather station Kurt erected in Labrador in 1943.

