tag:blogger.com,1999:blog-87088450002364110852023-11-15T06:53:00.210-08:00Flink-ItFlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.comBlogger44125tag:blogger.com,1999:blog-8708845000236411085.post-42898827057328600072015-05-13T02:45:00.000-07:002015-05-13T02:46:34.745-07:00NAGIOS plugin easy alternative for check_by_ssh<div class="dict_inner" id="dict_content">
This is a simple Python wrapper script for monitoring hosts whose NRPE port is firewalled, by hopping through an intermediate host that has direct access. </div>
<br />
<span style="font-weight: bold;">Repository</span><br />
<a href="https://github.com/luupux/switch-nrpe">https://github.com/luupux/switch-nrpe</a><br />
<br />
<span style="font-weight: bold;">Example</span><br />
<br />
<span style="font-weight: bold;">Direct access </span><br />
"check_swap on<span style="color: red;"> <span style="color: #009900;">direct.access.host</span></span> "<br />
<div style="text-align: left;">
<span style="font-size: 85%;"><span class="currency_converter_text"> <span style="font-weight: bold;">switch_nrpe.py</span> -t </span><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount">20</span></span></span></span></span></span></span></span><span class="currency_converter_text"> -H </span><span style="color: #009900;">direct.access.host</span><span class="currency_converter_text"> -c check_swap -a </span><span class="currency_converter_text"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount">20</span></span></span></span></span></span></span><span class="currency_converter_text">% </span><span class="currency_converter_text"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount">10</span></span></span></span></span></span></span><span class="currency_converter_text">%</span></div>
<br />
<span style="font-weight: bold;">Hop Access </span><br />
"check_swap on <span style="color: red;">host.not.direct.access</span> using <span style="color: #009900;">direct.access.host</span> gw"<br />
<div style="text-align: left;">
<span style="font-size: 85%;"> <span style="font-weight: bold;">switch_nrpe.py</span> --fhost=<span style="color: red;">host.not.direct.access</span><span class="currency_converter_text"> --fhop=<span style="color: #009900;">direct.access.hos</span>t --fcmd=check_swap </span><span class="currency_converter_text"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount">20</span></span></span></span></span></span></span></span><span class="currency_converter_text">% </span><span class="currency_converter_text"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount"><span class="currency_converter_link" title="Convert this amount">10</span></span></span></span></span></span></span><span class="currency_converter_text">%</span><br />
<br />
<br />
<br />
<br /></div>
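The hop logic above can be sketched roughly as follows. This is a hypothetical simplification, not the actual switch_nrpe.py code: the check_nrpe path, the function names, and the ssh invocation are all assumptions made for illustration.

```python
import subprocess

def build_command(host, check, args, hop=None,
                  nrpe_bin="/usr/lib/nagios/plugins/check_nrpe"):
    """Return the argv to execute; if a hop gateway is given, wrap the
    NRPE call in an ssh invocation so it runs from the hop host."""
    nrpe_cmd = [nrpe_bin, "-H", host, "-c", check, "-a"] + list(args)
    if hop is None:
        return nrpe_cmd  # direct access: run check_nrpe locally
    # firewalled target: execute the same check_nrpe call on the gateway
    return ["ssh", hop, " ".join(nrpe_cmd)]

def run_check(host, check, args, hop=None):
    """Execute the check and return (exit_code, output). Exit codes follow
    the Nagios convention: 0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN."""
    proc = subprocess.run(build_command(host, check, args, hop),
                          capture_output=True, text=True)
    return proc.returncode, proc.stdout.strip()
```

Because ssh propagates the remote command's exit status, the Nagios state computed on the hop host survives the extra hop unchanged.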
FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com1tag:blogger.com,1999:blog-8708845000236411085.post-38630139342961794602010-12-30T05:37:00.000-08:002015-05-13T02:43:48.593-07:00Nagios simple multi-ping check for a degraded gateway providerThis is a simple "multi ping" check for monitoring the performance of the default gateway from Nagios<br />
<br />
<a href="https://github.com/luupux/check_gw">https://github.com/luupux/check_gw</a><br />
<br />
<br />
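The idea behind a "multi ping" check can be sketched as below. This is a hypothetical illustration, not the actual check_gw code: the function names, thresholds, and ping flags (`-c`, `-W`, as on Linux) are assumptions. Sending several single pings and deriving a state from the fraction lost lets a degraded (lossy) provider link raise WARNING before it becomes CRITICAL.

```python
import subprocess

def ping_once(host, timeout=1):
    """One ICMP echo via the system ping binary; True on success."""
    res = subprocess.run(["ping", "-c", "1", "-W", str(timeout), host],
                         stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return res.returncode == 0

def check_gateway(host, count=10, warn=0.2, crit=0.5, probe=ping_once):
    """Return (nagios_state, message); `probe` is injectable for testing."""
    lost = sum(1 for _ in range(count) if not probe(host))
    loss = lost / count
    if loss >= crit:
        state = 2   # CRITICAL: link badly degraded or down
    elif loss >= warn:
        state = 1   # WARNING: partial packet loss
    else:
        state = 0   # OK
    return state, "GW %s loss=%.0f%%" % (host, loss * 100)
```

A real plugin would print the message and `sys.exit(state)` so Nagios can pick up the result.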
<br />FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com0tag:blogger.com,1999:blog-8708845000236411085.post-34633396316785769252010-11-15T04:03:00.000-08:002010-11-15T04:16:41.238-08:00Free SMSes through Google Calendar by http://www.kryogenix.org<pre>This is a simple Python script that sends an SMS from Google Calendar via the GData API

# Requires gdata.py-1.2.1 from http://code.google.com/p/gdata-python-client/
try:
    from xml.etree import ElementTree
except ImportError:
    from elementtree import ElementTree
import gdata.calendar.service
import gdata.service
import atom.service
import gdata.calendar
import atom
import base64
import time

def send_sms(message_text):
    cal_client = gdata.calendar.service.CalendarService()
    cal_client.email = "YOUR GOOGLE MAIL ACCOUNT"
    cal_client.password = "YOUR GOOGLE MAIL PASSWORD"
    cal_client.source = 'calendar-sms-misuse-1.0'
    cal_client.ProgrammaticLogin()

    event = gdata.calendar.CalendarEventEntry()
    event.title = atom.Title(text=message_text)
    event.content = atom.Content(text="")

    # can't set SMS reminders for under 5 minutes, so set this to 6 mins from now
    start_time = time.strftime('%Y-%m-%dT%H:%M:%S.000Z', time.gmtime(time.time() + (6 * 60)))
    end_time = time.strftime('%Y-%m-%dT%H:%M:%S.000Z', time.gmtime(time.time() + 3600))
    when = gdata.calendar.When(start_time=start_time, end_time=end_time)
    # can't set SMS reminders for under 5 minutes, so set this to 5
    reminder = gdata.calendar.Reminder(minutes=5, extension_attributes={"method": "sms"})
    when.reminder.append(reminder)
    event.when.append(when)

    cal_client.InsertEvent(event, '/calendar/feeds/default/private/full')


send_sms("Message body")

Original Post
http://www.kryogenix.org/days/2008/10/15/free-smses-through-google-calendar
<br
/></pre>FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com4tag:blogger.com,1999:blog-8708845000236411085.post-64192858250975765202010-08-20T04:31:00.000-07:002010-08-20T04:33:17.367-07:00Multi-core, Threads & Message Passing<a href="http://feeds.igvita.com/%7Er/igvita/%7E3/IkMxuajPRbM/">Multi-core, Threads & Message Passing</a>: "<p><img src="http://www.igvita.com/posts/10/multi-core.png" style="margin-right: 1em; margin-top: 0.75em;" align="left" /><a href="http://en.wikipedia.org/wiki/Moore%27s_law">Moore's Law</a> marches on, the transistor counts are continuing to increase at the predicted rate and will continue to do so for the foreseeable future. However, what has changed is <a href="http://en.wikipedia.org/wiki/Moore%27s_law#Transistor_count_versus_computing_performance">where</a> these transistors are going: instead of a single core, they are appearing in multi-core designs, which place a much higher premium on hardware and software parallelism. This is hardly news, I know. However, before we get back to arguing about the <strong>'correct'</strong> parallelism & concurrency abstractions (threads, events, actors, channels, and so on) for our software and runtimes, it is helpful to step back and take a closer look at the actual hardware and where it is heading.</p><br /><h4><strong>Single Core Architecture & Optimizations</strong></h4><br /><p><img src="http://www.igvita.com/posts/10/single-core.png" style="margin-right: 1em;" align="left" /></p><br /><p>The conceptual architecture of a single core system is deceivingly simple: single CPU, which is connected to a block of memory and a collection of other I/O devices. Turns out, simple is not practical. 
Even with modern architectures, the latency of a main memory reference (~100ns roundtrip) is prohibitively high, which combined with highly unpredictable control flow has led CPU manufacturers to introduce <a href="http://en.wikipedia.org/wiki/CPU_cache#Multi-level_caches">multi-level caches</a> directly onto the chip: Level 1 (L1) cache reference: ~0.5 ns; Level 2 (L2) cache reference: ~7ns, and so on.</p><br /><p>However, even that is not enough. To keep the CPU busy, most manufacturers have also introduced some cache prefetching and management schemes (ex: Intel's <a href="http://www.intel.com/technology/product/demos/cache/demo.htm">SmartCache</a>), as well as invested billions of dollars into <a href="http://en.wikipedia.org/wiki/Branch_predictor">branch prediction</a>, <a href="http://en.wikipedia.org/wiki/Instruction_pipeline">instruction pipelining</a>, and other tricks to squeeze every ounce of performance. After all, if the CPU has a separate floating point and an integer unit, then there is no reason why two threads of execution could not simultaneously run on the same chip - see <a href="http://en.wikipedia.org/wiki/Simultaneous_multithreading">SMT</a>. Remember Intel's <a href="http://en.wikipedia.org/wiki/Hyper-threading">Hyperthreading</a>? As another point of reference, Sun's <a href="http://en.wikipedia.org/wiki/UltraSPARC_T1">Niagara chips</a> are designed to run <a href="http://www.eetimes.com/electronics-news/4059429/Panel-confronts-multicore-pros-and-cons">four execution threads</a> per core.</p><br /><p>But wait, how did threads get in here? Turns out, threads are a way to expose the potential (and desired) hardware parallelism to the rest of the system. 
Put another way,<strong> threads are a low-level hardware and operating system feature</strong>, which we need in order to take full advantage of the underlying capabilities of our hardware.</p><br /><h4><strong>Architecting for the Multi-core World</strong></h4><br /><p>Since the manufacturers could no longer continue scaling the single core (power, density, communication), the designs have shifted to the next logical architecture: multiple cores on a single chip. After all, hardware parallelism existed all along, so the conceptual shift wasn't that large - shared memory, multiple cores, more concurrent threads of execution. Only one gotcha: remember those L1, L2 caches we introduced earlier? Turns out, they may well be the Achilles' heel for multi-core.</p><br /><p align="left"><img src="http://www.igvita.com/posts/10/multi-core-architecture.png" /></p><br /><p>If you were to design a multi-core chip, would you allow your cores to share the L1, or L2 cache, or should they all be independent? Unfortunately, there is no one answer to this question. Shared caches can allow higher utilization, which may lead to power savings (ex: great for laptops), as well as higher hit rates in certain scenarios. However, that same shared cache can easily create resource contention if one is not careful (DMA is a <a href="http://en.wikipedia.org/wiki/Direct_memory_access#Cache_coherency_problem">known offender</a>). Intel's Core Duo and Xeon processors use a shared L2, whereas AMD's Opteron, Athlon, and Intel's Pentium D opted for independent L1's and L2's. Even more interestingly, Intel's recent Itanium 2 gives each core an independent L1, L2, and an L3 cache! 
<strong>Different workloads benefit from different layouts</strong>.</p><br /><p><img src="http://www.igvita.com/posts/10/coherence.png" style="margin-right: 1em;" align="left" />As Phil Karlton once famously said:<em> 'There are only two hard things in Computer Science: cache invalidation and naming things,'</em> and as someone cleverly added later, <em>'and off by one errors'</em>. Turns out, <a href="http://en.wikipedia.org/wiki/Cache_coherency">cache coherency</a> is a major problem for all multi-core systems: if we prefetch the same block of data into an L1, L2, or L3 of each core, and one of the cores happens to make a modification to its cache, then we have a problem - the data is now in an inconsistent state across the different cores. We can't afford to go back to main memory to verify if the data is valid on each reference (as that would defeat the purpose of the cache), and a shared mutex is the very anti-pattern of independent caches!</p><br /><p>To address this problem, hardware designers have iterated over a number of data invalidation and propagation schemes, but the key point is simple: the cores share a bus or an interconnect over which messages are propagated to keep all of the caches in sync (<em>coherent</em>), and therein lies the problem. While the numbers vary, the overall consensus is that after <a href="http://www.csa.com/discoveryguides/multicore/review3.php">approximately 32 cores</a> on a single chip, the amount of required communication to support the shared memory model leads to diminished performance. Put another way, <strong>shared memory systems have limited scalability</strong>.</p><br /><h4><strong>Turtles all the way down: Distributed Memory</strong></h4><br /><p><img src="http://www.igvita.com/posts/10/distributed-memory.png" style="margin-right: 1em;" align="left" />So if cache coherence puts an upper bound on the number of cores we can support within the shared memory model, then let's drop the shared memory requirement! 
What if, instead of a monolithic view of the memory, each core instead had its own, albeit much smaller main memory? <a href="http://en.wikipedia.org/wiki/Distributed_memory">Distributed memory</a> model has the advantage of avoiding all of the cache coherency problems we listed above. However, it is also easy to imagine a number of workloads where the distributed memory will underperform the shared memory model.</p><br /><p>There doesn't appear to be any consensus in the industry yet, but if one had to guess, then a hybrid model seems likely: <strong>push the shared memory model as far as you can, and then stamp it out multiple times on a chip, with a distributed memory interconnect</strong> - it is cache and interconnect turtles all the way down. In other words, while message passing may be a choice today, in the future, it may well be a requirement if we want to extract the full capabilities of the hardware.</p><br /><h4><strong>Turtles all the way up: Web Architecture</strong></h4><br /><p><img src="http://www.igvita.com/posts/10/web-cores.png" style="margin-right: 1em;" align="left" />Most interesting of all, we can find the exact same architecture patterns and their associated problems in the web world. We start with a single machine running the app server and the database (CPU and main memory), which we later split into separate instances (multiple app servers share a remote DB, aka 'multi-core'), and eventually we shard the database (distributed memory) to achieve the required throughput. The similarity of the challenges and the approaches seems hardly like a coincidence. It is turtles all the way down, and it is turtles all the way up.</p><br /><h4><strong>Threads, Events & Message Passing</strong></h4><br /><p>As software developers, we are all intimately familiar with the shared memory model and the good news is: it is not going anywhere. 
However, as the core counts continue to increase, it is also very likely that we will quickly hit diminishing returns with the existing shared memory model. So, while we may disagree on whether threads are a correct application level API (see <a href="http://en.wikipedia.org/wiki/Process_calculus">process calculi variants</a>), they are also not going anywhere - either the VM, the language designer, or you yourself will have to deal with them.</p><br /><p>With that in mind, the more interesting question to explore is not which abstraction is 'correct' or 'more performant' (one can always craft an optimized workload), but rather how do we make all of these paradigms work together, in the context of a simple programming model?<strong> We need threads, we need events, and we need message passing - it is not a question of which is better</strong>.</p><br /><img src="http://feeds.feedburner.com/%7Er/igvita/%7E4/IkMxuajPRbM" width="1" height="1" 
/>"FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com0tag:blogger.com,1999:blog-8708845000236411085.post-76254251260596202302010-06-30T23:48:00.000-07:002010-06-30T23:50:21.444-07:00Winning the Big Data SPAM Challenge__HadoopSummit2010<div style="width: 425px;" id="__ss_4653291"><strong style="display: block; margin: 12px 0pt 4px;"></strong><br /><object id="__sse4653291" height="355" width="425"><param name="movie" value="http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=5bigdataspamchallangehadoopsummit2010-100630140401-phpapp02&stripped_title=5-big-dataspamchallangehadoopsummit2010"><param name="allowFullScreen" value="true"><param name="allowScriptAccess" value="always"><embed name="__sse4653291" src="http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=5bigdataspamchallangehadoopsummit2010-100630140401-phpapp02&stripped_title=5-big-dataspamchallangehadoopsummit2010" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" height="355" width="425"></embed></object><div style="padding: 5px 0pt 12px;">View more presentations from <a href="http://www.slideshare.net/ydn">Yahoo Developer Network</a>.</div></div>FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com0tag:blogger.com,1999:blog-8708845000236411085.post-90190142621459087032010-06-30T23:45:00.000-07:002010-06-30T23:51:27.785-07:00App Engine SDK 1.3.5 Released With New Task Queue, Python Precompilation, and Blob Features<h2 class="date-header"><span></span></h2> <div class="date-posts"> <div class="post-outer"> <div class="post hentry"> <a name="8060975897784113561"></a> </div></div></div><p>Today we are happy to announce the 1.3.5 release of the App Engine SDK for both Python and Java developers.</p> <p>Due to popular demand, we have increased the throughput of the Task Queue API, from 50 reqs/sec per app to 50 reqs/sec per queue. 
You can also now specify the amount of storage available to the taskqueue in your app, for those with very large queues with many millions of tasks. Stay tuned for even more Task Queue scalability improvements in the future.</p> <p>Additionally, in this release we’ve also added support for precompilation of Python source files to match the same feature we launched for Java last year. For Python, you can now use precompilation to speed up application loading time and to reduce CPU usage for new app instances. You can enable precompilation by including the following lines in your app.yaml file:</p> <span style="font-family:courier new;">derived_file_type:</span><br /><span style="font-family:courier new;">- python_precompiled</span><br /><p>This will start offline precompilation of Python modules used by your app when you deploy your application. Currently precompilation is off by default for Python applications, but it will be enabled by default in some future release. (Java precompilation has been enabled by default since the release of 1.3.1.)</p> <p>To give you a taste of what this feature is like, we tested this on a modified version of <a href="http://code.google.com/p/rietveld">Rietveld</a> (which included a copy of Django 1.0.4 in the app directory, and which did not use the datastore in its base url). 
The latency and CPU usage results for the initial load of the application, after uploading a new version of the app and requesting the homepage, were:</p> Before precompilation enabled:<br /><span style="font-family:courier new;">Test 1: 1450ms 1757cpu_ms</span><br /><span style="font-family:courier new;">Test 2: 1298ms 1523cpu_ms</span><br /><span style="font-family:courier new;"> Test 3: 1539ms 1841cpu_ms</span><br />After precompilation enabled:<br /><span style="font-family:courier new;">Test 1: 805ms 669cpu_ms</span><br /><span style="font-family:courier new;">Test 2: 861ms 702cpu_ms</span><br /><span style="font-family:courier new;"> Test 3: 921ms 803cpu_ms</span><br /><p>Of course, any individual app’s performance will vary, so we recommend that you experiment with the setting for your application. Please submit your feedback and results to the <a href="http://code.google.com/appengine/community.html">support group!</a></p> <p>In addition to Task Queues and Python precompilation, we have made a few changes to the Blobstore in 1.3.5. First, we have added file-like interfaces for reading Blobs. In Python, this is supported through the <a href="http://code.google.com/appengine/docs/python/blobstore/blobreaderclass.html">BlobReader</a> class. 
In Java, we have implemented the <a href="http://code.google.com/appengine/docs/java/javadoc/com/google/appengine/api/blobstore/BlobstoreInputStream.html">BlobstoreInputStream</a> class, which gives an InputStream view of the blobs stored in Blobstore.</p>http://googleappengine.blogspot.com/2010/06/app-engine-sdk-135-released-with-new.htmlFlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com0tag:blogger.com,1999:blog-8708845000236411085.post-86448525651887483632010-06-28T00:53:00.000-07:002010-06-30T05:13:10.955-07:00Weak Consistency and CAP Implications<a href="http://feeds.igvita.com/%7Er/igvita/%7E3/H2zaSt5B2PU/">Weak Consistency and CAP Implications</a>: "<p><img style="margin-right: 1em;" src="http://www.igvita.com/posts/10/cap-network.png" align="left" />Migrating your web application from a single node to a distributed setup is always a deceivingly large architectural change. You may need to do it due to a resource constraint of a single machine, for better availability, to decouple components, or for a variety of other reasons. Under this new architecture, each node is on its own, and a network link is present to piece it all back together. So far so good; in fact, ideally we would also like for our new architecture to provide a few key properties: <em><strong>C</strong>onsistency</em> (no data conflicts), <em><strong>A</strong>vailability</em> (no single point of failure), and <em><strong>P</strong>artition tolerance</em> (maintain availability and consistency in light of network problems).</p><br /><p>Problem is, the <a href="http://en.wikipedia.org/wiki/CAP_theorem">CAP theorem</a>, proposed by Eric Brewer and later <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.20.1495&rep=rep1&type=pdf">proved by Seth Gilbert and Nancy Lynch</a>, shows that together, these three requirements are impossible to achieve at the same time. 
In other words, in a distributed system with an unreliable communications channel, it is impossible to achieve consistency and availability at the same time in the case of a network partition. Alas, such is the tradeoff.</p><br /><h4><strong>'Pick Two' is too simple</strong></h4><br /><p><img style="margin-right: 1em;" src="http://www.igvita.com/posts/10/cp-ca-pa.png" align="left" />The <a href="http://www.cs.berkeley.edu/%7Ebrewer/cs262b-2004/PODC-keynote.pdf">original CAP conjecture</a> presented by Eric Brewer states that as architects, we can only pick two properties (CA, CP, or PA) at the same time, and many attempts have since been made to classify different distributed architectures into these three categories. Problem is, as <a href="http://dbmsmusings.blogspot.com/2010/04/problems-with-cap-and-yahoos-little.html">Daniel Abadi recently pointed out</a> (and <a href="http://twitter.com/eric_brewer/status/13057435836">Eric Brewer agrees</a>), the relationships between CA, CP and AP are not nearly as clear-cut as they appear on paper. In fact, any attempt to create a hard partitioning into these buckets seems to only <a href="http://blog.nahurst.com/visual-guide-to-nosql-systems">increase the confusion</a> since many of the systems can arbitrarily shift their properties with just a few operational tweaks - in the real world, it is rarely an all or nothing deal.</p><br /><h4><strong>Focus on Consistency</strong></h4><br /><p>Following some great conversations about CAP at a recent <a href="http://nosqlsummer.org/">NoSQL Summer</a> meetup and hours of trying to reconcile all the edge cases, it is clear that the CA <em>vs.</em> CP <em>vs.</em> PA model is, in fact, a poor representation of the implications of the CAP theorem - the simplicity of the model is nice, but in reality the actual design space requires more nuance. 
Specifically, instead of focusing on all three properties at once, it is more productive to first focus along the continuum of “data consistency” options: none, weak, and full.</p><br /><p><img style="margin-right: 1em;" src="http://www.igvita.com/posts/10/consistency-cap.png" align="left" />On one extreme, a system can demand no consistency. For example, a clickstream application which is used for best effort personalization can easily tolerate a few missed clicks. In fact, the data may even be partitioned by data centre, geography, or server, such that depending on where you are, a different “context” is applied - from home, your search returns one set of results, from work, another! The advantage of such a system is that it is inherently highly available (HA) as it is a share nothing, best effort architecture.</p><br /><p>On the other extreme, a system can demand full consistency across all participating nodes, which implies some communications protocol to reach a consensus. A canonical example is a “debit / credit” scenario where full agreement across all nodes is required prior to any data write or read. In this scenario, all nodes maintain the exact same version of the data, but compromise HA in the process - if one node is down, or is in disagreement, the system is down.</p><br /><h4><strong>CAP Implies Weak Consistency</strong></h4><br /><p>Strong consistency and high availability are both desirable properties, however the CAP theorem shows that we can’t achieve both of these over an unreliable channel at once. Hence, CAP pushes us into a <em>“weak consistency”</em> model where dealing with failures is a fact of life. However, the good news is that we do have a gamut of possible strategies at our disposal.</p><br /><p align="center"><img src="http://www.igvita.com/posts/10/cap-space.png" align="center" /></p><br /><p>In case of a failure, your first choice could be to choose consistency over availability. 
In this scenario, if a <a href="http://en.wikipedia.org/wiki/Quorum_%28Distributed_Systems%29">quorum</a> can be reached, then one of the network partitions can remain available, while the second goes offline. Once the link between the two networks is restored, a simple data repair can take place - the minority partition is strictly behind, hence there are no possible data conflicts. Hence we sacrifice HA, but do continue to serve some of the clients.</p><br /><p>On the other hand, we could lean towards availability over consistency. In this case, both sides can continue to accept reads and/or writes. Both sides of the partition remain available, and mechanisms such as <a href="http://en.wikipedia.org/wiki/Vector_clock">vector clocks</a> can be used to assist with conflict resolution (although, some conflicts will always require application level resolution). Repeatable reads, read-your-own-writes, and quorum updates are just a few of the examples of possible consistency <em>vs.</em> availability strategies in this scenario.</p><br /><p>Hence, a simple corollary to the CAP theorem: <em>when choosing availability under the weak consistency model, multiple versions of a data object will be present, will require conflict resolution, and it is up to your application to determine what is an acceptable consistency tradeoff and a resolution strategy for each type of object</em>.</p><br /><h4><strong>Speed of Light: Too Slow for PNUTS!</strong></h4><br /><p><img style="margin-right: 1em;" src="http://www.igvita.com/posts/10/yahoo.png" align="left" />Interestingly enough, dealing with network partitions is not the only case for adopting “weak consistency”. The <a href="http://portal.acm.org/citation.cfm?id=1454167">PNUTS</a> system deployed at Yahoo must deal with WAN replication of data between different continents, and unfortunately, the speed of light imposes some strict latency limits on the performance of such a system. 
In Yahoo’s case, the communications latency is enough of a performance barrier such that their system is configured, by default, to operate under the “choose availability, under weak consistency” model - think of latency as a pseudo-permanent network partition.</p><br /><h4><strong>Architecting for Weak Consistency</strong></h4><br /><p>Instead of arguing over CA <em>vs.</em> CP <em>vs.</em> PA, first determine the consistency model for your application: strong, weak, or shared nothing / best effort. Notice that this choice has nothing to do with the underlying technology, and everything with the demands and the types of data processed by your application. From there, if you land in the weak-consistency model (and you most likely will, if you have a distributed architecture), start thinking how you can deal with the inevitable data conflicts: will you lean towards consistency and some partial downtime, or will you optimize for availability and conflict resolution?</p><br /><p>Finally, if you are working under weak consistency, it is also worth noting that it is not a matter of picking just a single strategy. Depending on the context, the application layer can choose a different set of requirements for each data object! Systems such as <a href="http://project-voldemort.com/">Voldemort</a>, <a href="http://cassandra.apache.org/">Cassandra</a>, and <a href="http://en.wikipedia.org/wiki/Dynamo_%28storage_system%29">Dynamo</a> all provide mechanisms to specify a desired level of consistency for each individual read and write. 
So, an order processing function can be rejected if it fails to establish a quorum (consistency over availability), while at the same time, a new user comment can be accepted by the same data store (availability over consistency).</p>"FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com2tag:blogger.com,1999:blog-8708845000236411085.post-5060826192333177822010-06-09T06:39:00.000-07:002010-06-30T05:11:53.038-07:00Rails Performance Needs an Overhaul<a href="http://feeds.igvita.com/%7Er/igvita/%7E3/mNpthf4nAys/">Rails Performance Needs an Overhaul</a>: "<p><img style="margin-right: 1em;" src="http://www.igvita.com/posts/10/drivetrain.png" align="left" />Browsers are getting faster; JavaScript frameworks are getting faster; MVC frameworks are getting faster; databases are getting faster. 
And yet, even with all of this innovation around us, it feels like there is a massive gap when it comes to the end product of delivering an effective and scalable service as a developer: the performance of most of our web stacks, when measured end to end, is poor at the best of times, and plainly terrible the rest of the time.</p><br /><p>The fact that a vanilla Rails application requires a dedicated worker with a <strong>50MB</strong> stack to render a login page is nothing short of absurd. There is nothing new about this, nor is this exclusive to Rails or a function of Ruby as a language - whatever language or web framework you are using, chances are, you are stuck with a similar problem. But <a href="http://www.igvita.com/2008/11/13/concurrency-is-a-myth-in-ruby/">GIL or no GIL</a>, we ought to do better than that. Node.js is a recent innovator in the space, and as a community, we can either learn from it, or ignore it at our own peril.</p><br /><h4><strong>Measuring End-to-End Performance</strong></h4><br /><p align="center"><img src="http://www.igvita.com/posts/10/webstack.png" align="center" /></p><br /><p>A modern web service is composed of many moving components, all of which come together to create the final experience. First, you have to model your data layer, pick the database, and then ensure that it can get your data in and out in the required amount of time - there has been lots of innovation in this space thanks to the NoSQL movement. Then, we layer our MVC frameworks on top, and fight religious wars as developers over whose DSL is more beautiful - to me, Rails 3 deserves all the hype. On the user side, we are building faster browsers with blazing-fast JavaScript interpreters and CSS engines. However, the driveshaft (the app server), which connects the engine (data & MVC) to the front-end (the browser + DOM & JavaScript), is often just a checkbox in the deployment diagram. 
The problem is, this checkbox is also the reason why the ‘scalability’ story of our web frameworks is nothing short of terrible. </p><br /><p><img style="margin-right: 1em;" src="http://www.igvita.com/posts/10/rails-servers.png" align="left" />It doesn't take much to construct a <a href="http://www.slideshare.net/igrigorik/beyond-gem-install-mysql-in-ruby/20">pathological example</a> where a popular framework (Rails), combined with a popular database (MySQL), and a popular app server (<a href="http://github.com/fauna/mongrel">Mongrel</a>) produce less than stellar results. Now the finger pointing begins. MySQL is more than capable of serving thousands of concurrent requests, the app server also claims to be threaded, and the framework even allows us to configure a database pool!</p><br /><p>Except that, the database driver <a href="http://www.igvita.com/2008/10/27/scaling-activerecord-with-mysqlplus/">locks our VM</a>, and both the framework and the app server still have a few mutexes deep in their guts, which impose hard limits on the concurrency (read, serial processing). The problem is, this is the default behaviour! No wonder people <a href="https://www.google.com/search?hl=en&q=rails+is+slow&btnG=Search&aq=f&aqi=&aql=&oq=&gs_rfai=">complain about 'scalability'</a>. The other popular choices (<a href="http://www.modrails.com/">Passenger</a> / <a href="http://unicorn.bogomips.org/">Unicorn</a>) “<em>work around</em>” this problem by requiring dedicated VMs per request - that's not a feature, that's a bug!</p><br /><h4><strong>The Rails Ecosystem</strong></h4><br /><p>To be fair, we have come a long way since the days of WEBrick. In many ways, Mongrel made Rails viable, Rack gave us the much needed interface to become app-server independent, and the guys at Phusion gave us Passenger which both simplified the deployment, and made the resource allocation story moderately better. 
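The failure mode described above (a driver mutex that quietly serializes a "threaded" server) is easy to reproduce in miniature. A simulation in Python, not actual Mongrel/MySQL code:

```python
# Simulate a "threaded" app server whose database driver holds a global
# lock: N threads issue requests concurrently, but the lock serializes
# the blocking portion, so total time ~= N * per-request time.
import threading, time

driver_lock = threading.Lock()   # stands in for the driver mutex / GIL

def handle_request():
    with driver_lock:            # the driver "locks our VM" for the query
        time.sleep(0.05)         # blocking database call

start = time.time()
threads = [threading.Thread(target=handle_request) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
print(f"10 'concurrent' requests took {elapsed:.2f}s")  # ~0.5s, not ~0.05s
```

Ten threads, one lock: despite the threaded facade, the requests complete one after another, which is precisely the "read: serial processing" behaviour described above.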
To complete the picture, Unicorn recently rediscovered the *nix IPC worker model, and is currently <a href="http://engineering.twitter.com/2010/03/unicorn-power.html">in use at Twitter</a>. Problem is, none of this is new (at best, we are iterating on the Apache 1.x to 2.x model), nor does it solve our underlying problem.</p><br /><p><img style="margin-right: 1em;" src="http://www.igvita.com/posts/10/nodejs.png" align="left" />Turns out, while all the components are separate, and it's great to treat them as such, we do need to look at the entire stack as one picture when it comes to performance: the database driver needs to be smarter, the framework should take advantage of the app server's capabilities, and the app server itself can't pretend to work in isolation.</p><br /><p>If you are looking for a great working example of this concept in action, look no further than <a href="http://nodejs.org/">node.js</a>. There is nothing about node that can't be reproduced in Ruby or Python (<a href="http://github.com/eventmachine/eventmachine">EventMachine</a> and <a href="http://twistedmatrix.com/trac/">Twisted</a>), but the fact that the framework forces you to think about and use the right components (fully async & non-blocking) is exactly why it is currently grabbing the mindshare of the early adopters. Rubyists, Pythonistas, and others can ignore this trend at their own peril. Moving forward, end-to-end performance and scalability of any framework will only become more important.</p><br /><h4><strong>Fixing the 'Scalability' story in Ruby</strong></h4><br /><p><img style="margin-right: 1em;" src="http://www.igvita.com/posts/10/ruby-rails.png" align="left" />The good news is, for every outlined problem, there is already a working solution. 
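The payoff of "fully async & non-blocking" is easy to demonstrate. A sketch using Python's asyncio as a stand-in for EventMachine/Twisted; the coroutine style is also what Ruby 1.9 fibers recover, sequential-looking code without callbacks:

```python
# Evented, non-blocking I/O: one thread, one event loop, many requests
# in flight. Coroutines keep the code sequential-looking (no callbacks).
import asyncio, time

async def handle_request(i):
    await asyncio.sleep(0.05)    # non-blocking "database call"
    return f"response {i}"

async def main():
    # 100 concurrent requests finish in ~0.05s total, not 100 * 0.05s,
    # because the event loop interleaves them while each waits on I/O.
    return await asyncio.gather(*(handle_request(i) for i in range(100)))

start = time.time()
responses = asyncio.run(main())
elapsed = time.time() - start
print(f"{len(responses)} requests in {elapsed:.2f}s")
```

Compare this with the one-lock-per-query behaviour of a blocking driver: the work is the same, but the waiting overlaps instead of accumulating.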
With a little extra work, the driver story is <a href="http://github.com/oldmoe/mysqlplus">easily</a> <a href="http://github.com/igrigorik/em-mysqlplus">addressed</a> (the MySQL driver is just an example; the same story applies to virtually every other SQL/NoSQL driver), and the frameworks are steadily removing the bottlenecks one at a time.</p><br /><p>After a <a href="http://en.oreilly.com/rails2010/public/schedule/detail/14096">few iterations at PostRank</a>, we rewrote some <a href="http://labs.postrank.com/">key drivers</a>, grabbed <a href="http://code.macournoyer.com/thin/">Thin</a> (an evented app server), and made <a href="http://www.igvita.com/2010/03/22/untangling-evented-code-with-ruby-fibers/">heavy use of continuations</a> in Ruby 1.9 to create our own API framework (<em>Goliath</em>), which is perfectly capable of serving hundreds of concurrent requests at a time from within a single Ruby VM. In fact, we even managed to avoid all the callback spaghetti that plagues node.js applications, which also means that the <a href="http://www.igvita.com/2010/04/15/non-blocking-activerecord-rails/">same continuation approach works just as well</a> with a vanilla Rails application. It just baffles me that this is not a solved problem already.</p><br /><br /><p>The state of the art in end-to-end Rails stack performance is not good enough. 
We need to fix that.</p>"FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com0tag:blogger.com,1999:blog-8708845000236411085.post-69092730607682001152010-05-27T08:08:00.000-07:002010-05-27T08:11:13.476-07:00The future can be written in RPython now | PyevolveFollowing the <a href="http://alexgaynor.net/2010/may/15/pypy-future-python/" title="PyPy is the Future of Python" target="_blank" onclick="javascript:pageTracker._trackPageview ('/outbound/alexgaynor.net');">recent article</a> arguing why <a href="http://pypy.org/" title="PyPy Home" target="_blank" onclick="javascript:pageTracker._trackPageview ('/outbound/pypy.org');">PyPy</a> is the future of Python, I must say: PyPy is not the future of Python, it is the present. 
When I <a href="http://pyevolve.sourceforge.net/wordpress/?p=862" title="Pyevolve :: Pyevolve benchmark on different Python flavors" target="_blank">last tested</a> it (PyPy-c 1.1.0) with Pyevolve on the optimization of a simple Sphere function, it was at least 2x slower than Unladen Swallow Q2, but at that time PyPy was not able to JIT. Now, with this new release of PyPy and its JIT support, the scenario has changed.<br /><br /><br /><a href="http://pyevolve.sourceforge.net/wordpress/?p=1189">The future can be written in RPython now | Pyevolve</a>FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com0tag:blogger.com,1999:blog-8708845000236411085.post-59309913570437099452010-05-26T23:01:00.000-07:002010-06-30T04:59:46.058-07:00voltdb = redis + sql interface? interesting:<h1 class="title">The Fast, Scalable Open-Source DBMS You'll Never Outgrow</h1> <p>Created by DBMS R&D pioneer, <strong><a href="http://voltdb.com/team/mike-stonebraker-cto-voltdb">Mike Stonebraker</a></strong>, VoltDB is a next-generation open-source DBMS that scales way beyond traditional databases, without sacrificing SQL or ACID for transactional data integrity. VoltDB is for database applications that support fast-growing transactional workloads and require: </p> <ul><li>Orders of magnitude better performance than conventional DBMS </li><li>Linear scalability </li><li>SQL as the DBMS interface </li><li>ACID transactions to ensure data consistency and integrity </li><li>High availability 24x7x365</li></ul><br /><br /><a href="http://hootsuite.com/dashboard#" class="_username username _userInfoPopup" title="igrigorik">#igrigorik</a> voltdb = redis + sql interface? 
interesting: <a href="http://bit.ly/al9XiF" target="_blank" rel="nofollow">http://bit.ly/al9XiF</a><br />Official link: <a href="http://voltdb.com/">http://voltdb.com/</a>FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com2tag:blogger.com,1999:blog-8708845000236411085.post-83557009783802601602010-05-26T22:59:00.000-07:002010-06-30T05:13:41.209-07:00Java, JEE, JavaFx and more: A graphical counter on GAEJ (Google App Engine for Java) using Images API Service<div class="post-body" id="post-2380852336356010668"> <style>#fullpost{display:inline;}</style> <p>The Images Service on GAEJ provides the ability to manipulate images; in particular, you can composite multiple images into a single one. I'll use this capability to display a graphical hit counter. This tutorial is only a quick how-to, but I'm sure you can write real programs using the instructions given in this post. 
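As a taste of the compositing idea, here is a minimal pure-Python sketch: ASCII "glyphs" stand in for the PNG digit images, and string concatenation stands in for the Images API composite call (the post itself uses the GAE Java Images API):

```python
# Sketch of a graphical hit counter: map each digit of the count to a
# digit "image" and composite them side by side. ASCII glyphs stand in
# for PNG digit images; only the digits this demo needs are defined.
GLYPHS = {
    "0": [" _ ", "| |", "|_|"],
    "1": ["   ", "  |", "  |"],
    "2": [" _ ", " _|", "|_ "],
    "4": ["   ", "|_|", "  |"],
}

def render_counter(count):
    digits = [GLYPHS[d] for d in str(count)]
    # Composite: concatenate each row of every digit glyph left to right.
    return "\n".join("".join(d[row] for d in digits) for row in range(3))

print(render_counter(1024))
```

The real recipe is the same shape: look up one image per digit, then composite them at increasing x-offsets into a single output image.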
For simplicity, error handling is reduced to a minimum.<br /><br />The idea is pretty simple: persist a counter using Memcache or the Datastore, keep the digits 0 through 9 as PNG images, read the images as bytes, build composites from them, put all the composites in a List, and finally use that List to produce the composed image.</p><p><br /></p></div><br /><a href="http://www.java-javafx.com/2010/05/graphical-counter-on-gaej-google-app.html">Java, JEE, JavaFx and more: A graphical counter on GAEJ (Google App Engine for Java) using Images API Service</a>FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com0tag:blogger.com,1999:blog-8708845000236411085.post-56445461690719465502010-05-24T07:42:00.000-07:002010-05-24T07:42:40.516-07:00Search Results: All on USTREAM, Most Views listings, All entries, page 1 of 1, 05/24/10.<ul class="smallThumbList"><li><div> <h3><a href="http://www.ustream.tv/recorded/7091178">Montly.info: dall'idea al design dell'interfaccia mobile, step by step</a></h3> Total Views: 26 | Length: 44:06 <br /> A <a href="http://www.ustream.tv/discovery/recorded/technology-computers">Computers</a> recorded video from <a href="http://www.ustream.tv/channel/whymca-ct55">WhyMCA - Aula CT55</a><br /> <a href="http://www.ustream.tv/discovery/recorded/all?broadcast=4247272">(find more from this show)</a> </div> <br /> </li><li> <a href="http://www.ustream.tv/recorded/7090488" style="float: left;" class="thumbnail" title="Android Sensor"> <img style="top: 50%; margin-top: -33px;" src="http://static-cdn1.ustream.tv/videopic/0/1/7/7090/7090488/1_4247151_7090488_90x68_b_1:2.jpg" alt="Android Sensor" /> </a> <div> <h3><a href="http://www.ustream.tv/recorded/7090488">Android Sensor</a></h3> Total Views: 21 | Length: 37:41 <br /> A <a href="http://www.ustream.tv/discovery/recorded/technology-computers">Computers</a> recorded video from <a href="http://www.ustream.tv/channel/whymca-castiglioni">WhyMCA - Sala Castiglioni</a><br /> <a 
href="http://www.ustream.tv/discovery/recorded/all?broadcast=4247151">(find more from this show)</a> </div> <br /> </li><li> <a href="http://www.ustream.tv/recorded/7092271" style="float: left;" class="thumbnail" title="Il paradigma cognitivo dei dispositivi touch screen"> <img style="top: 50%; margin-top: -33px;" src="http://static-cdn2.ustream.tv/videopic/0/1/7/7092/7092271/1_4247272_7092271_90x68_b_1:2.jpg" alt="Il paradigma cognitivo dei dispositivi touch screen" /> </a> <div> <h3><a href="http://www.ustream.tv/recorded/7092271">Il paradigma cognitivo dei dispositivi touch screen</a></h3> Total Views: 20 | Length: 40:58 <br /> A <a href="http://www.ustream.tv/discovery/recorded/technology-computers">Computers</a> recorded video from <a href="http://www.ustream.tv/channel/whymca-ct55">WhyMCA - Aula CT55</a><br /> <a href="http://www.ustream.tv/discovery/recorded/all?broadcast=4247272">(find more from this show)</a> </div> <br /> </li><li> <a href="http://www.ustream.tv/recorded/7093310" style="float: left;" class="thumbnail" title="Android Augemnted reality"> <img style="top: 50%; margin-top: -33px;" src="http://static-cdn1.ustream.tv/videopic/0/1/7/7093/7093310/1_4247151_7093310_90x68_b_1:2.jpg" alt="Android Augemnted reality" /> </a> <div> <h3><a href="http://www.ustream.tv/recorded/7093310">Android Augemnted reality</a></h3> Total Views: 19 | Length: 15:06 <br /> A <a href="http://www.ustream.tv/discovery/recorded/technology-computers">Computers</a> recorded video from <a href="http://www.ustream.tv/channel/whymca-castiglioni">WhyMCA - Sala Castiglioni</a><br /> <a href="http://www.ustream.tv/discovery/recorded/all?broadcast=4247151">(find more from this show) </a> </div> </li></ul><br /><br /><br /><br /><a href="http://www.ustream.tv/discovery/recorded/all?q=whymca">Search Results: All on USTREAM, Most Views listings, All entries, page 1 of 1, 05/24/10.</a> <br /><a 
href="http://www.ustream.tv/discovery/recorded/all?q=whymca"></a>FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com0tag:blogger.com,1999:blog-8708845000236411085.post-83848779039370966462010-05-24T07:26:00.000-07:002010-05-24T07:26:46.307-07:00First Look: H.264 and VP8 Compared - StreamingMedia.com<div class="deck">VP8 is now free, but if the quality is substandard, who cares? Well, it turns out that the quality isn't substandard, so that's not an issue, but neither is it twice the quality of H.264 at half the bandwidth. See for yourself, below.<p>To set the table, Sorenson Media was kind enough to encode these comparison files for me to both H.264 and VP8 using their Squish encoding tool. They encoded a standard SD encoding test file that I've been using for years. I'll do more testing once I have access to a VP8 encoder, but wanted to share these quick and dirty results.</p><br /></div><br /><a href="http://www.streamingmedia.com/Articles/Editorial/Featured-Articles/First-Look-H.264-and-VP8-Compared-67266.aspx">First Look: H.264 and VP8 Compared - StreamingMedia.com</a>FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com0tag:blogger.com,1999:blog-8708845000236411085.post-51231168394400838002010-05-24T07:24:00.000-07:002010-06-30T04:58:34.999-07:00Alex Gaynor -- PyPy is the Future of Python<p>Currently the most common implementation of Python is known as CPython, and it's the version of Python you get at <a class="reference external" href="http://python.org/">python.org</a>, probably 99.9% of Python developers are using it. However, I think over the next couple of years we're going to see a move away from this towards PyPy, Python written in Python. This is going to happen because PyPy offers better speed, more flexibility, and is a better platform for Python's growth, and the most important thing is you can make this transition happen.</p> <p>The first thing to consider: speed. 
PyPy is a lot faster than CPython for a lot of tasks, and they've got <a class="reference external" href="http://speed.pypy.org/overview/">the benchmarks to prove it</a>. There's room for improvement, but it's clear that for a lot of benchmarks PyPy screams, and it's not just number crunching (although PyPy is good at that too). Although Python performance might not be a bottleneck for a lot of us (especially us web developers who like to push performance down the stack to our database), would you say no to having your code run 2x faster?</p> <p>The next factor is flexibility. By writing their interpreter in RPython, PyPy can automatically generate C code (like CPython), but also JVM and .NET versions of the interpreter. Instead of writing entirely separate Jython and IronPython implementations of Python, just automatically generate them from one shared codebase. PyPy can also have its binary generated with a stackless option, just like Stackless Python; again, no separate implementations to maintain. Lastly, PyPy's JIT is almost totally separate from the interpreter; this means changes to the language itself can be made without needing to update the JIT. Contrast this with many JITs that need to statically define fast paths for various operations…<br /></p><br /><a href="http://alexgaynor.net/2010/may/15/pypy-future-python/">Alex Gaynor -- PyPy is the Future of Python</a>FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com0tag:blogger.com,1999:blog-8708845000236411085.post-91426516680238301362010-05-21T00:01:00.001-07:002010-06-30T04:59:18.769-07:00Hosted SQL on App Engine For Business<h4>Later:</h4> <table width="600px"><tbody><tr><td> <b>Hosted SQL</b><br /> Dedicated, full-featured SQL servers available for your application. 
</td> <td width="250px"> Status: <span style="color:blue;">In Development</span><br /> Estimate: Limited Release in Q3 2010<br /></td></tr></tbody></table><br />Google Roadmap link : <a href="http://code.google.com/appengine/business/roadmap.html">http://code.google.com/appengine/business/roadmap.html</a>FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com0tag:blogger.com,1999:blog-8708845000236411085.post-16706463695321592882010-05-20T23:49:00.000-07:002010-06-30T05:00:26.620-07:00Google and SpringSource join hands in the heavens<strong class="trailer">Google I/O</strong> Google and VMware's SpringSource arm have <a target="_blank" href="http://googlecode.blogspot.com/2010/05/enabling-cloud-portability-with-google.html">teamed up</a> to offer a series of development tools for building Java apps that can be deployed across multiple web-based hosting services. That includes Google's own App Engine, VMware-happy infrastructure services, and third-party services such as Amazon's Elastic Compute Cloud.<br /><br /><a href="http://www.theregister.co.uk/2010/05/19/google_teams_with_springsource/" target="_blank" rel="nofollow">http://www.theregister.co.uk/2010/05/19/google_teams_with_springsource/</a>FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com0tag:blogger.com,1999:blog-8708845000236411085.post-44952673622515457672010-05-20T23:46:00.001-07:002010-06-30T05:00:51.848-07:00Google Launches Business Version Of App Engine; Collaborates With VMwareIt’s no secret that Google has been ramping up its enterprise offerings. 
The company has made a strong push for the adoption of Google Apps, launching the <a href="http://techcrunch.com/2010/03/09/google-apps-marketplace/">Apps Marketplace,</a> allowing Apps users to add other layers to their environments from companies like <a href="http://techcrunch.com/2010/03/09/socialwok-takes-a-stroll-in-the-google-apps-marketplace/">Socialwok</a> and <a href="http://techcrunch.com/2010/03/09/web-based-productivity-suite-zoho-finds-a-place-in-the-google-apps-marketplace/">Zoho.</a> Today, Google is taking it one step further. At Google I/O today, the search giant has <a href="http://googlecode.blogspot.com/2010/05/announcing-google-app-engine-for.html">announced</a> that <a href="http://www.crunchbase.com/product/google-app-engine">Google App Engine,</a> a platform for building and hosting web applications in the cloud, will now include a <a href="http://code.google.com/appengine/business/">Business version,</a> catered towards enterprises. The <a href="http://googleenterprise.blogspot.com/2010/05/buy-or-build-with-more-choice-for-your.html">new premium version</a> allows customers to build their own business apps on Google’s cloud infrastructure. 
Google is also announcing a collaboration with <a href="http://www.vmware.com/">VMware</a> for deployment and development of apps on the new cloud infrastructure.<br /><br /><a href="http://www.techcrunchit.com/2010/05/19/google-launches-business-version-of-app-engine-collaborates-with-vmware/">Google Launches Business Version Of App Engine; Collaborates With VMware</a><br /><br /><a href="http://sharethis.com/"></a>FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com0tag:blogger.com,1999:blog-8708845000236411085.post-17666858588368196072010-05-20T23:37:00.001-07:002010-06-30T05:09:56.536-07:00Scalable Work Queues with BeanstalkAny web application that reaches some critical mass eventually discovers that separation of services, where possible, is a great strategy for scaling the service. In fact, oftentimes a user action can be offloaded into a background task, which can be handled asynchronously while the user continues to explore the site. However, coordinating this workflow does require some infrastructure: a message queue, or a work queue. The distinction between the two is subtle and blurry, but it does carry important architectural implications. 
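The offload pattern itself is straightforward to sketch. Here Python's stdlib queue stands in for an external broker such as beanstalkd (a real deployment would put the broker between separate processes or machines):

```python
# Offload a user action into a background task: the request handler
# enqueues a job and returns immediately; a worker drains the queue.
# queue.Queue is an in-process stand-in for a broker like beanstalkd.
import queue, threading

jobs = queue.Queue()
results = []

def worker():
    while True:
        job = jobs.get()
        if job is None:              # sentinel: shut the worker down
            break
        results.append(f"processed {job}")
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

def handle_user_action(action):
    jobs.put(action)                 # enqueue and return immediately
    return "202 Accepted"

print(handle_user_action("resize-avatar"))
jobs.put(None)
t.join()
print(results)
```

The user-facing call returns as soon as the job is enqueued; the worker catches up asynchronously, which is exactly the decoupling a work queue buys you.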
Should you pick a messaging bus such as AMQP or XMPP, roll your own database-backed system such as BJ, or go with Resque…<br /><br /><br /><a href="http://www.igvita.com/2010/05/20/scalable-work-queues-with-beanstalk/">http://www.igvita.com/2010/05/20/scalable-work-queues-with-beanstalk/</a>FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com0tag:blogger.com,1999:blog-8708845000236411085.post-58431674625127794392010-05-16T23:52:00.000-07:002010-06-30T05:07:09.577-07:00Pycon4: from Simone Deponti's talk<h1 class="documentFirstHeading"><span style="font-size:100%;"><span class="" id="parent-fieldname-title">"Crogioli, alambicchi e beute: dove mettere i vostri dati" ("Crucibles, alembics, and flasks: where to put your data")</span></span></h1>It covers the SQLAlchemy ORM and the ZODB: how to manage your data<br />and which choices to make based on your needs.<br /><br /><div style="width: 425px;" id="__ss_4097606"><strong style="display: block; margin: 12px 0pt 4px;"><a href="http://www.slideshare.net/shywolf9982/crogioli-alambicchi-e-beute-dove-mettere-i" title="Crogioli, alambicchi e beute, dove mettere i ">Crogioli, alambicchi e beute (slides)</a></strong><object id="__sse4097606" height="355" width="425"><param name="movie" value="http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=cab-100514100301-phpapp02&stripped_title=crogioli-alambicchi-e-beute-dove-mettere-i"><param name="allowFullScreen" value="true"><param name="allowScriptAccess" value="always"><embed name="__sse4097606" src="http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=cab-100514100301-phpapp02&stripped_title=crogioli-alambicchi-e-beute-dove-mettere-i" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" height="355" width="425"></embed></object><div 
style="padding: 5px 0pt 12px;"><br />For the author's full article:<br /><span style="font-size:78%;"><a href="http://open.abstract.it/it/blog/simone/crogioli-alambicchi-e-beute">http://open.abstract.it/it/blog/simone/crogioli-alambicchi-e-beute</a><a href="http://open.abstract.it/it/documentazione/repository/crogioli-alambicchi-e-beute"><br />http://open.abstract.it/it/documentazione/repository/crogioli-alambicchi-e-beute</a><br /></span><br /><br /><br /></div></div>FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com0tag:blogger.com,1999:blog-8708845000236411085.post-75021420354290195662010-05-14T00:42:00.001-07:002010-05-14T00:45:46.927-07:00my company worked like a terrorist organizationAn interesting idea in this moment of crisis, useful (in my opinion) also as a way to let individuals be more creative and independent, perhaps by creating cells that self-organize<br />around the projects to be tackled<br /><br 
/>Vorrei che la mia azienda funzionasse come un’organizzazione terroristica… ("I wish my company worked like a terrorist organization…") « Meeting delle Idee - http://ow.ly/1KXOEFlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com0tag:blogger.com,1999:blog-8708845000236411085.post-10612399405842937742010-05-13T02:56:00.000-07:002010-06-30T05:04:17.377-07:00Richard Stallman comes to the Marche: two meetings in Ancona<a href="http://feedproxy.google.com/%7Er/ziobudda/heiu/%7E3/Fs1cir6-FgQ/richard-stallman-arriva-nella-marche-due-gli-incontri-ancona">Richard Stallman comes to the Marche: two meetings in Ancona</a>: "Yes indeed, our champion of the worldwide Open Source movement is coming to the Marche as well, more precisely to Ancona, for two meetings:<br /><br />- Thursday, May 13 at 5:00 PM – at the IT department of the Municipality of Ancona<br />- Friday, May 14 at 10:30 AM – in room A7/8 of the Faculty of Engineering, Università Politecnica delle Marche<p></p>"FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com2tag:blogger.com,1999:blog-8708845000236411085.post-63104327375937713252010-05-11T01:01:00.000-07:002010-06-30T05:06:00.605-07:00World Builder (high quality)<object style='background-image: url("http://i3.ytimg.com/vi/VzFpg271sm8/hqdefault.jpg");' height="295" width="480"><param name="movie" value="http://www.youtube.com/v/VzFpg271sm8&hl=en_US&fs=1"><param name="allowFullScreen" value="true"><param name="allowscriptaccess" value="always"><embed src="http://www.youtube.com/v/VzFpg271sm8&hl=en_US&fs=1" allowscriptaccess="never" allowfullscreen="true" wmode="transparent" type="application/x-shockwave-flash" height="295" 
width="480"></embed></object>FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com0tag:blogger.com,1999:blog-8708845000236411085.post-24965398152228169802010-05-10T23:50:00.000-07:002010-06-30T05:06:49.517-07:00Amazon Web Services sign-up tutorial slide<div style="width: 425px;" id="__ss_4023630"><strong style="display: block; margin: 12px 0pt 4px;"><a href="http://www.slideshare.net/simone.brunozzi/amazon-web-services-signup" title="Amazon Web Services sign-up">Amazon Web Services sign-up</a></strong><object id="__sse4023630" height="355" width="425"><param name="movie" value="http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=amazon-web-servicessign-up-100509024004-phpapp02&stripped_title=amazon-web-services-signup"><param name="allowFullScreen" value="true"><param name="allowScriptAccess" value="always"><embed name="__sse4023630" src="http://static.slidesharecdn.com/swf/ssplayer2.swf?doc=amazon-web-servicessign-up-100509024004-phpapp02&stripped_title=amazon-web-services-signup" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" height="355" width="425"></embed></object><div style="padding: 5px 0pt 12px;"><br /></div></div>FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com1tag:blogger.com,1999:blog-8708845000236411085.post-8032048351960565722010-05-10T12:46:00.000-07:002010-05-10T12:46:09.614-07:00Pixar American Mathematical Society<h1 class="headlineText"> Moving Remy in Harmony: Pixar's Use of Harmonic Functions</h1> <span id="pullQuote">This article will describe some new mathematical techniques being tested at Pixar for use in upcoming films...</span><br /> <span class="month"></span><br /><br /><a href="http://www1.ams.org/samplings/feature-column/fcarc-harmonic">American Mathematical Society</a>FlinkIthttp://www.blogger.com/profile/01450050846477221613noreply@blogger.com1