Thursday, August 31, 2006

Improving propagation with ionized meteor trails

From the Wikipedia definition of "Meteor Burst Communications":

Meteor burst communications, or MBC for short, is a radio propagation mode that exploits the ionized trails of meteors during atmospheric entry to establish brief communications paths between radio stations up to 2200 kilometers (1400 miles) apart. It is also referred to as meteor scatter communications in some documents.

As the earth moves along its orbital path, tens of thousands of particles known as meteors enter the upper atmosphere. When these meteors enter the atmosphere and begin to burn up, they create a trail of ionized particles that can persist for up to several seconds. The ionization trails can be very dense and can be used to reflect radio waves. The frequencies that can be reflected by any particular ion trail are determined by the intensity of the ionization created by the meteor, often a function of the initial size of the particle, and are generally between 20 MHz and 500 MHz.

The distance over which communications can be established is determined by the altitude at which the ionization is created, the location over the surface of the earth where the meteor is falling, the angle of entry into the atmosphere, and the relative locations of the stations attempting to establish communications. Because these ionization trails only exist for fractions of a second to as long as a few seconds in duration, they create only brief windows of opportunity for communications.
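
As a quick sanity check on those numbers (my own back-of-the-envelope arithmetic, not part of the quoted article): meteor trails form at roughly 80 to 120 km altitude, and two stations can both have line-of-sight to a point at altitude h when they are up to about 2*sqrt(2*R*h) apart over a spherical Earth of radius R. For h near 100 km, that lands right around the quoted 2200 km figure:

  # Back-of-the-envelope check (my own numbers, not from the article):
  # farthest two stations can be while both still "see" a reflection
  # point at the given altitude over a spherical Earth.
  EARTH_RADIUS_KM = 6371.0

  def max_link_distance_km(trail_altitude_km)
    # Horizon distance to the trail is sqrt(2*R*h) for h << R,
    # and each station can be that far from the reflection point.
    2 * Math.sqrt(2 * EARTH_RADIUS_KM * trail_altitude_km)
  end

  puts max_link_distance_km(100).round  # => 2258, close to the quoted 2200 km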

A nice read for anyone interested in radio communications. The known origins go back to 1929, when the Japanese physicist Hantaro Nagaoka reported the interaction between meteors and radio wave propagation. Later, in 1931, it was noticed that long-distance propagation briefly occurred at times of major meteor showers.

Wednesday, August 16, 2006

ServerBeach: a highly recommended dedicated hosting provider

Sometimes things look like they're not going to work. This often happens when there's a rare condition in place which automatically turns out to be a show-stopper. When a company has a strict policy (not about the usual encumbrances like file sharing, copyrighted work, etc.), it can discourage a few potential customers from actually paying for truly nice services. Maybe the company offers competitive pricing and quality of service, but the barrier is simply set too high and customers leave in the middle of the process. This isn't the case with ServerBeach, originally a Texas-based (thanks Richard for pointing this out) dedicated hosting provider with probably the most competitive pricing in the market, now owned by the Canadian company Peer1.

The balance between cost and quality doesn't suffer with this company. Without unnecessary buzzwords, our very recent experience with them shows that the people on the sales and customer service team can handle very concrete, exceptional requests, which may require handling serious documentation and other formalities (e.g. legal documentation, information that should be handled discreetly as confidential, etc.).

While the so-called (and nowadays widespread) "sales live-chat" may be the only way with other providers, with these people you can actually pick up your phone, dial their number and get in touch with the team. If at first you think you aren't going to get what you want, just ask them over the phone and discuss the possibilities. Don't be afraid of explaining your needs; if they handled ours, they can surely do the same with yours ;).

They are very prompt at offering possible solutions to your problems, and it’s not the typical sales conversation about the pricing and "extremely good services" of the company. There’s no buzz. They tell you what you need and what they need and want from you.

The pricing is the lowest we’ve found so far, and it allows you to customize the order and make it suit your needs. For a monthly fee (without setup costs!) of US $129.00 you get a pretty decent server, and they allow (and also support) running game servers, a good thing for gamers out there seeking decent ping times/latency. At such a low price, it would be unwise not to give it a try.

Thursday, August 10, 2006

Light over Ruby on Rails "critical security upgrade" 1.1.5

Yesterday, the core development team of the Ruby on Rails framework announced in their weblog a critical security upgrade (1.1.5), which should be applied immediately.

*1.1.5* (August 8th, 2006)

* Mention in docs that config.frameworks doesn't work when getting Rails via Gems. #4857 [Alisdair McDiarmid]

* Change the scaffolding layout to use yield rather than @content_for_layout. [Marcel Molina Jr.]

* Includes critical security patch

Without giving details on the issue ("The issue is in fact of such a criticality that we’re not going to dig into the specifics. No need to arm would-be assailants"), some people started reviewing the diff between the 1.1.5 and 1.1.4 releases, straight from the Subversion repository. A few hot spots have been identified so far, one of them being a clear example of a potentially exploitable condition:


@@ -268,7 +273,7 @@
   $LOAD_PATH.select do |base|
     base = File.expand_path(base)
     extended_root = File.expand_path(RAILS_ROOT)
-    base[0, extended_root.length] == extended_root || base =~ %r{rails-[\d.]+/builtin}
+    base.match(/\A#{Regexp.escape(extended_root)}\/*#{file_kinds(:lib) * '|'}/) || base =~ %r{rails-[\d.]+/builtin}
   end
 else
   $LOAD_PATH
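
Why is the replaced line a hot spot? The old test only checks that an expanded path begins with RAILS_ROOT, so any directory under the application root (a world-writable upload directory included) passes the filter, while the new expression additionally requires the path to continue with one of the known library kinds. A quick standalone check (the paths and the kind list are my own stand-ins; the real file_kinds(:lib) may differ):

  # Stand-in values; the real application paths and file_kinds(:lib) may differ.
  RAILS_ROOT = '/var/www/app'
  extended_root = File.expand_path(RAILS_ROOT)
  upload_dir = File.expand_path(File.join(RAILS_ROOT, 'public', 'uploads'))

  # Old test: any expanded path under RAILS_ROOT passes, uploads included.
  puts upload_dir[0, extended_root.length] == extended_root  # => true

  # New test: the path must continue with a known library kind.
  kinds = %w(app lib vendor) * '|'  # stand-in for file_kinds(:lib) * '|'
  puts !!upload_dir.match(/\A#{Regexp.escape(extended_root)}\/*#{kinds}/)  # => false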

Some people have already explained the issue (thanks Brandon for the links), although the development of a proof of concept by some Russian folks brought new issues to the attention of RoR users. Apparently, the fix itself introduced new problems.

The original issue was about the possibility of forcing Ruby to load arbitrary code by uploading a file and injecting a path into the LOAD_PATH variable through the HTTP_LOAD_PATH header (which is supplied client-side... thus the user has control over it). The flaw in the routing code would allow this file (e.g. a controller) to be loaded and executed. This would happen when requesting a URL with the controller name and one of its methods, leading the routing engine to walk through LOAD_PATH looking for the file and then execute it: http://railshost.tld/evil_controller/the_method.
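
To make the class of bug concrete, here is a deliberately simplified, hypothetical sketch of the pattern just described; this is not the actual Rails code, and the names are mine:

  # Hypothetical sketch of the flaw class (NOT the actual Rails code).
  # 'env' stands for the CGI-style request environment, where a
  # client-supplied "Load-Path:" header shows up as HTTP_LOAD_PATH.
  def register_load_path(env)
    # Trusting a client-controlled value: the core of the bug.
    $LOAD_PATH.unshift(env['HTTP_LOAD_PATH']) if env['HTTP_LOAD_PATH']
  end

  # The autorouting then resolves /evil_controller/the_method by walking
  # $LOAD_PATH for a file named after the controller and loading it.
  def load_controller(name)
    $LOAD_PATH.each do |dir|
      candidate = File.join(dir, "#{name}.rb")
      # Loading the file executes its top-level code, i.e. the attacker's.
      return require(candidate) if File.exist?(candidate)
    end
  end

If an attacker can both upload evil_controller.rb somewhere on disk and push that directory into $LOAD_PATH, requesting the URL above executes their code.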

Side note: if $SAFE mode is enabled, even at its lowest level it is supposed to prevent code from being executed from world-writable directories, so the attack shouldn't work from directories like /tmp. This probably doesn't apply to certain win32 installations anyway.

The newly discovered issues are related to loading built-in (already available) code through the autorouting engine, which causes different kinds of situations, from crashes to infinite recursion bugs (leading to the infamous 'Stack level too deep' error).
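
For reference, this is the error class such a loop runs into; a minimal demonstration of my own, unrelated to the Rails code itself:

  # A function that calls itself unconditionally exhausts the stack,
  # producing the same error the broken autorouting triggers.
  def recurse
    recurse
  end

  begin
    recurse
  rescue SystemStackError => e
    puts e.message  # => "stack level too deep"
  end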

Nope, guys, the routing problems aren't fully fixed, and one can still require about 500 .rb files from the standard Rails vendor/* directories just by typing some text as a URL in the browser.

Apparently, a bug report ticket was submitted some time ago:

Where is test code for that "patch"?

And there was already a ticket "#5408 Unhandled urls can cause loading of arbitrary ruby files" on the Rails Trac from 06/16 about the mentioned issues...

However, a follow-up comment notes that the 1.1.5 upgrade fixes the situation for everything except WEBrick:

The rails team is trying to contain damage. The fix works on everything except for webrick.

While the current status might pose a lower immediate risk, controllers are expected to work together with other code, so managing to execute methods within these controllers could still lead to exploitable conditions (e.g. methods involved in filesystem operations), and such cases might be waiting for someone to take advantage of them.
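
As a hypothetical illustration (mine, not from the advisory) of why that matters: in the Rails of that era, every public controller method was routable as an action, so a helper that was never meant to be exposed can still be triggered from a URL:

  require 'fileutils'

  # Hypothetical controller, not from the advisory.
  class FilesController < ApplicationController
    def index
      # The action the author actually meant to expose.
    end

    # Meant as an internal helper, yet reachable as /files/purge_cache:
    def purge_cache
      FileUtils.rm_rf(File.join(RAILS_ROOT, 'tmp', 'cache'))  # filesystem side effect
    end
  end

Making such helpers private (Rails also offered hide_action for this) keeps them out of the routable set.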