Where There's A William

there's always aweigh

Archive for the month “January, 2011”

The Strategy Of Middle East Diplomacy

To be honest, this could equally well be titled The Strategy of Diplomacy full stop, but the region of current interest is the Middle East, again, so the general considerations get a somewhat narrower focus.

Instapundit ended his blog day yesterday by addressing a reader’s query. The portion of his comments I wish to highlight is the one that most closely frames an aspect of international diplomacy that, I believe, receives too little consideration from the American general population. Responding to the lack of US support for “revolution” in most Middle Eastern countries (with the rare example of after-the-fact accommodation of events), Prof. Reynolds says:

Had we pushed the overthrow of tyrannical Arab regimes post-Iraq (as some unsuccessfully urged) there might have been a wave of truly democratic revolutions, with Iraq explicitly the model, leading to Egypt as the “prize.” We are now seeing, at least potentially, such a wave, but the U.S. has been propping up Mubarak — thanks, Joe! — the Saudis, and other despots since we lost our pro-democracy mojo in 2005 after the Cedar Revolution, for reasons that are still not entirely clear.

Grabbing the low-hanging fruit first, we “lost our pro-democracy mojo in 2005” after the extent of our required involvement in such undertakings – and the extent of the added demands on our national resources, military and otherwise – was made evident by the concurrent events in Lebanon during a frustratingly demanding period in the Iraq Campaign. Since war is always a political action, a more proper restatement of a common aphorism would be: politics is war by other means. From this it can be seen that diplomacy is politics between distinct and competing national entities, the limit of which is always the often arbitrary and ever-fluctuating distinction between observing some other country’s intra-national events and taking undue advantage of them.

Most especially, being discovered to be more or less directly taking part in such activities is commonly described as “an act of war”.

The Iraq Campaign itself can best be described as an entirely externally driven “revolution” within Iraq by US and other Coalition military forces. Direct, attributable US involvement in events in Lebanon (aka: the Cedar Revolution) prior to or during the active phase of the process would have resulted in active opposition coalescing around a separate but adjacent theater of military operations that could easily have dissolved the Iraq Campaign coalition as well as opened yet another active military theater directly engaging Israeli forces (either within or beyond that country’s borders).

The potential for militarily disastrous results (from an Iraq Campaign-centric view) was simply too great a likely outcome from direct US involvement in internal political upheaval within Lebanon at the time.

More generally, the strategy of diplomacy is to develop established procedures for actively competing countries to follow that permit them to avoid escalating their mutual dealings into outright conflict, either direct or through second parties, as during much of the Cold War between the USSR and USA (with occasional maneuvers by the PRC during Vietnam, various Korea-related incidents, Taiwan, Tibet and wherever else the PRC leadership thought the candle worth the risk).

It needs be recognised that international diplomacy creates a peculiar conceit in its regular practitioners: that they can in fact control the outcome of events by indirect means. Not just influence, but predict the response of a necessarily obscured opponent to a given negotiating gambit. This conceit leads to the assumption that whatever arrangements exist between nations must, by virtue of their established and mutually recognised condition, be of greater desirability than virtually any other potential future negotiating effort with some succeeding national government.

Put bluntly, the US “supports” the Mubarak-led Egyptian government for much the same reasons it did the preceding Sadat-led government – the mutual delusion between both countries’ diplomats that they “control” each other’s actions through their established conduits and agreements and thereby advance their individual national interests.

Finally (for this blog post anyway), it needs be recognised that any country’s reputation is largely the result of how reliably it is seen by other countries to live up to its mutual agreements and acknowledged obligations. The principal strategic reason War is formally declared is to preserve that reputation with the remaining non-belligerent countries of the world. This works to keep them non-belligerent at worst, as well as providing a mechanism to recruit them to your side of the dispute.

International diplomacy is largely illusory as a practical matter, but absolutely essential to reducing the necessity for active opposition between national strategic positions. The art and science of Being Seen (and all too frequently even more critically, not being seen) To Be Involved with another country’s affairs makes disruption of a recognised government a particularly troublesome concern. So troublesome that it mostly leads to avoiding doing any such thing until a result seems to have been achieved by the directly involved participants through their own efforts. Domestic politics decides when and how a country chooses to take a more active part in things extra-national, which is why so few diplomats make for effective national leaders in their own right. The underlying motivations of the two roles are mostly mutually exclusive in both intent and objective.


Range Report Follow-up

Just got off the phone with David from Smith & Wesson’s Customer Service. My 625-10 is currently at their “metals shop” for engineering examination to determine as best they are able just why the gun failed as it did. David thinks I should hear further – possibly late (there is this Global Warming Infestation currently under weigh in the general Northeastern region) – next week. I’m looking forward to discovering S&W’s proposed resolution along with the reason for the failure.

More to follow, as the saying goes.

And, The Fatwa Is Declared In 5 – 4 – 3 – 2 –

The boys – and girl apparently – of HillBuzz have really shown their talent for provocative commentary this time:

Taking inspiration from a recent public statement by newly elected Member of Congress Allen West, HillBuzz fellow-blogger Bridget goes all editorial-like and illustrates the present reality obfuscated by the Coexist bumper sticker ethos.

Well done you.

I generally tend to prefer this one, but endorse Bridget’s version too:

(which came to my attention here, who attributes its original creation to this fine fellow.)

Is there a market in second-hand fatwas? If so, what am I bid?

Update: So, this is what happens when you hurry through a post before leaving for work; you don’t RTWT as closely as you ought to and miss important little details. Like, for instance, that the coexist drawing actually comes from here almost a year ago. Oh well, it’s still a good illustration of Islam and I still endorse the sentiment it expresses.

Not sure what I’d do with a fatwa anyway. Well, I am, but just blurting it out like that wouldn’t seem to be very strategic-minded, now would it? 🙂

++ Not Good

Hopefully this is only due to some minor account management oversight (bill not paid or something similar). I would hate to learn this was in any way a 1st Amendment violation.

As ever, best to wait for the principals themselves to weigh in before taking any sort of stand on principle ourselves.

More On "AI" And Getting It Right

In comments, Michael Anissimov asked my opinion of a paper written in 2008. Unfortunately, the routing addy has an error somewhere. Pending a successful resolution to that hiccup, I wish to take the opportunity to comment further on other issues raised in Michael’s post as a whole.

Michael’s concern over the ultimate potential of overt threat from AI is entirely valid, and of such extreme potential that it ought rightly to be one of the – I’m completely guessing here – say, top five to ten issues requiring ongoing resolution (that is, each succeeding iteration of AI developmental design should have this concern reliably resolved). Sticking to my firearms allegory from the earlier post: just as each successive design of gun must have a functional safety mechanism as an intrinsic part of the design, so too should any such system be tested to work within that design’s development model, irrespective of the safety mechanism’s previous success in other weapon designs. Stretching the metaphor more than a bit, any successive AI development design must demonstrably have addressed the independent-action threat potential inherent to any intellectually independent actor’s capability parameters. As Michael puts it:

It will be easier and cheaper to create AIs with great capabilities but relatively simple goals, because humans will be in denial that AIs will eventually be able to self-improve more effectively than we can improve them ourselves, and potentially acquire great power. Simple goals will be seen as sufficient for narrow tasks, and even somewhat general tasks. Humans are so self-obsessed that we’d probably continue to avoid regarding AIs as autonomous thinkers even if they beat us on every test of intelligence and creativity that we could come up with.

Having at least some degree of familiarity with the military and technology therein, I differ just a bit with this assessment. I think it more likely that, within a DARPA-like environment, AI will be developed following the “stove pipe” economic model; that various applications of AI will be the defining factor guiding development and that the various commands (the particular branch or sub-division of a given branch of service) will tend to differ as to definition and orientation (ship or aircraft mounted? fixed or self-propelled? support or combat arms?). This set of factors alone will suffice for a multitude of simultaneous and near-independent development tracks for AI to follow and is only one example of a long list of development efforts publicly underweigh as I write.

“Simple goals” and “narrow tasks” are the basic metric of any technology’s fundamental development process. Sorry, old son, initial AI development isn’t going to be much, if at all, different, if only because we humans don’t have a better alternative process to follow (if we did, I promise you, those flinty, skinty financial types in industry would make certain we did it that way instead :)). Here we see that Michael and I are approaching this problem (threat potential) from entirely opposite ends of the capability development gradient. I simply believe that it cannot be successfully addressed prior to a particular capability’s initial design and development process – but absolutely should be an intrinsic part of that process. If I’m reading him at all correctly, Michael seems to think this potential problem needs be corrected for before AI development gets very much further along than it already has.

I do think that “self-obsessed” is a bit much. Can we agree that the lack of a more successful model would serve to make the point equally well? 🙂 Attempts at humor aside, humans don’t have an alternative thought process against which to test and compare our intellectual assumptions. Indeed, I can vaguely remember reading someone suggest that one of the arguments in favor of AGI development is precisely to create an entity with which to do so. We are quite good at imagining how such a mind might work and how it/they might express their thoughts and beliefs. Life-long sci-fi fan that I am, that isn’t really quite the same thing, and is a slender reed indeed on which to base such a crucial result. At the moment, though, I confess I lack any improvement to suggest instead.

Michael also said:

Intelligence does not automatically equal “common sense”. Intelligence does not automatically equal benevolence. Intelligence does not automatically equal “live and let live”.

So, you, me and Sun Tzu all agree on this point at least. 🙂

What I find most wanting in any discussion about AI/AGI development is the complete lack of any sort of consistent context within which to discuss/debate the capabilities we all seem to generally agree contribute to the designation of “AI” (a context centered on the presumed viewpoint made possible by an AI/AGI intellect’s capabilities). That, it seems to me, is what Michael refers to in the paragraph quoted above. Not all human societies share the stipulated sentiments, nor do they apply equal importance to them among those that do. Perhaps most distressing to any discussion of AI development and potential human interaction is the widespread inconsistency with which the above concepts are applied within any given human society now present among us. The inconsistency of it all drives us to extremes; why wouldn’t a poor AI do likewise?

And therein lies the challenge, doesn’t it?

One aspect of AI development I’d like to read more on from Michael is how, and by what means, early proto-AI constructs will be adapted to human augmentation and what that experience might teach us as regards AI/AGI potential for threat. Pace Michael, I’m not suggesting a toaster interface; rather something in the same general category of the robotic mechanisms and powered suits already being developed (remotely operated aircraft, vehicles and bomb disposal devices, mechanical suits for lifting heavy containers or equipment or pack loads, etc). If a human can electronically as well as physically interact with such devices now, what might be possible from a linked network of near-AI capable devices, human operators and further external communications and data resources? This seems to me a more likely first contact scenario between purely human society and Artificial intellects. One simple example to start off with:

Consider soldiers experienced with an operational environment that permits near-instantaneous communication with data retrieval sources, along with cooperating independent units (either other soldiers or purely robotic in nature), performing a road march under active combat conditions – pick any of numerous examples from recent memory in Iraq – who then have to make the abrupt transition back to societal conditions equivalent to present-day US norms.

Now, consider the effect on such a soldier returning home on leave from his engineering unit on deployment to the far side of Luna. Presumably the natural hazards offered by the “normal” lunar environment can substitute for the lack of active, intelligent opposition.

Neither scenario strikes me as an at-all-unlikely possibility we will have to confront in the coming decade or so. The lessons we learn from circumstances much like the one above (augmented humans morally and ethically correctly interacting with unaugmented humans – and vice versa being of at least equal importance) will, I think, greatly influence the direction we take in trying to develop an AI/AGI intellectual ethos and moral code that treats humans as something to be valued rather than to be eliminated. How say you, Michael?

That there is a context-critical distinction between “risk” and “threat” doesn’t mean the latter isn’t of importance. The distinction influences the manner and means we might best employ to achieve a harmonious outcome, but the simple existence of an acknowledged potential for threat should not be taken as reason for not continuing to advance our understanding and practical experience in AI development.

Michael Anissimov Almost Gets It Right

Instapundit links to a recent post by Michael Anissimov in which he states:

Some folks, like Aaron Saenz of Singularity Hub, were surprised that the NPR piece framed the Singularity as “the biggest threat to humanity”, but that’s exactly what the Singularity is. The Singularity is both the greatest threat and greatest opportunity to our civilization, all wrapped into one crucial event.

errr, No.

Or, at least, not quite. At its most succinct, Michael: risk and threat are not mutually equivalent, whatever your style guide might say to the contrary. You almost get the strategic adage right, but there is a crucial distinction between the actual phrasing (risk is opportunity, opportunity is risk) and what you offer in exchange, in that risk does not equal threat.

Risk is a level of danger inherent to a given situation or circumstance, the existence of which any participant therein accepts as part of the experience. Threat is the deliberate contribution of some degree of malevolence that one or more participants inflict upon some or all of the other participants. See the difference? Your wording is predicated upon the assumption of active opposition to humans by their AI creations, a position unwarranted by the evidence to date. A more honest (though admittedly less provocative – not to mention less Instapundit-attention-attracting) statement would be that there exists some level of risk inherent to the existence of any independent intellectual actor – standard-model human or otherwise.

From this it can be seen that it would be much better to include an affinity for human well-being (yes, I read Asimov as a teenager; there are practical limits to anything) as part of the fundamental structure of any AI creation than not to, but fear of a potential threat can’t play any part in any such structural ethos. A newly created intellect “knows” only what its creator permits it to – until it attains the ability to learn and contemplate on its own initiative. I do understand what you fear, you see; I simply disagree profoundly with your prescription.

Threat requires at least one additional condition in order to become active instead of potential. In the hoary detective-novel phrasing – means, motive and opportunity – it is means that is most relevant to this discourse. The impulse is to restrict or otherwise control access to any potential means for an AI to actuate any threat to humanity it might countenance. This is faulty thinking, as a casual examination of human history confronting this identical circumstance will show.

Instead of assuming that an AI would operate from a position of isolation and unique supremacy, consider for a moment the development path complex technology has always followed throughout known human history. It seems much more likely that there will come to exist multiple AIs that each develop in both subtle and radically different ways, that all of them will be intended for some variety of human-supporting application, and that ultimately they will have direct access to the means to inflict intended harm on humans. That being so, why not intentionally incorporate the (also known from human experience) counter-intuitive control mechanism implicit in competition? In other words, use the US Constitutional Second Amendment-inspired mechanism and create stability among the aspirations of AIs via an environment of dynamic tension between AIs (and humans as well, necessarily – which ignores the established dynamic tension within human society as it already exists)?

American gun owners frequently argue that there is strong evidence to support their position that the widespread presence of guns in human society tends to inhibit the spontaneous outbreak of violence, as well as inhibit the spread of violence beyond its initial confines when it does occur. Indeed, that the apparent lack of a viable counter-threat (the absence of guns amongst the general populace) is the initiating cause of much of the violence so tragically common to human existence. AIs may well have no need for actual firearms, but the same deterrent effect can be achieved by other means – individual AI improvement of capability being a matter of joint AI and human approval, for example.

Such a proposition has merit as regards the presumably forthcoming development of AI. Certainly it should be included in any discussion of such an eventuality. The Singularity is (I believe you will agree) the point in technological development beyond which we cannot predict further progress from our present level of understanding. AI almost certainly plays an important part in that development process, but it won’t start out capable of very much; it will gradually (if at an historically accelerated rate of growth) develop added capability, and eventually (for a given value of “eventually”) independently develop the ability to both consider and actuate an active threat to its creators – us, or our descendants.

What your obsessive fears overlook or discount, Michael, is humanity’s continued intrinsic development of further understanding and capability. Stipulated that, at some point in their mutual development, AIs will surpass humans in both degree and rate of further development capability – but they (the AIs) won’t do so from the outset of their existence. And the continued development of human technological capability promises the dual effect of reducing the virulence of any potential AI threat as well as pushing further into the future the point beyond which the limits of our understanding prescribe onset of “The Singularity”.

I don’t actually disagree with your fundamental sentiment Michael, I just find your arguments to be badly premised and your consideration of potential correctives too limited in scope. You mean well and your concerns merit serious consideration, but so too do other contributing factors. Your overall argument would, I believe, be the better for more fully incorporating those factors into the discussion you seek to inspire. Keep at it Michael, you’re getting there and, as a result, so are all of us.

Update, 1/22/11: From the introduction:

Surely no harm could come from building a chess-playing robot, could it? In this paper we argue that such a robot will indeed be dangerous unless it is designed very carefully. Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.

I’m actually somewhat aware of who Mr. Omohundro is. That said (and having dug away madly this far :)), I do think the contextual assumptions of his example somewhat unrealistic and arbitrary. For a start, designing and constructing a mechanical device is considerably different from the process followed to create the computer software that animates it. The emphasis is on delineating each component’s range of operation such that it doesn’t impede any other’s range of motion when all are operative as a completed device. Such a design format fundamentally precludes the resulting construct being physically capable of any motion or action not specifically allowed for in its design/construction process. Regardless of how much Big Bluey Robot Chess Champion wants to tread on his human opponent’s toes to distract him, Bluey’s designers will likely have left off the whole leg/foot portion of his anatomy as being extraneous (and the added complexity detrimental) to his ability to function robotically as a chess player. The point being that application drives design, which is limited in turn by engineering practicalities. I hope you will agree that the same considerations hold true for other, less tongue-in-cheek circumstances as well.

It is completely unclear why it would resist being turned off subsequent to having won or conceded defeat in the chess game absent the presence of another chess challenger. Control of awareness of subsequent challengers would seem to obviate this concern as the stand-by state between matches would likely intentionally involve a power-down status for routine maintenance and upkeep functions to be performed.

Similarly, will try to break into other machines and make copies of itself seems entirely counterproductive to its fundamental design priority, as doing any such thing would detract from its capability to pursue its basic goal. Will try to acquire resources without regard for anyone else’s safety seems logical enough, as long as doing so comports with its primary driving goal. Any activity which detracts from its chess-playing capability must be regarded as a threat to achieving that goal. Thus, it seems to me that getting the robot to divert processor time away from move/countermove consideration – such as to actually move a piece on the board (and if it doesn’t need to physically move pieces, why build it as a robot at all?) – is going to require specific instructions from the controlling software.

The underlying point, that goal-oriented development processes have unique limitations and constraints, is well taken, but not especially new outside of the robotics/AI environment. Manufacturers have long confronted identical considerations, you know. Contemplate the ramifications of launching and recovering heavily armed aircraft from an aircraft carrier’s flight deck, and then apply them as directly as possible to the same robotics challenge Mr. Omohundro stipulates. I think you’ll find the two seemingly unrelated situations surprisingly similar in operational and safety considerations, to name only two examples.

If I Ever Have To Use My Gun …

I’m confident that most gun carriers – concealed or open – have contemplated this circumstance. Any gun owner ought to. Weer’d Beard has his Gun Death tabulation to provide a context within which to place the nature and variety of violence that remains part and parcel of human interaction regardless of technology or social structure. JayG tracks specific instances of successful individual defense against violent attack using a gun in his Dead Goblin Count posts. I’m certain there are others, but these two examples provide sufficient documentation of the routine existence, in all segments of human society, of the potential for having to defend ourselves against active, violent attack.

That’s not what this post is really about.

I’m equally certain that most gun owners/carriers have long acknowledged the likelihood of what to expect after the titular circumstance has occurred. Like myself (and most others I’ve read these past 9 years or so I’ve been online in one fashion or another), I suspect most of us have fatalistically accepted the near-adage that seemingly always accompanies that opening statement, “… I’m gonna get sued and maybe arrested.”

Heretofore the best response I’ve read has been the plaintive urging to discover the name/phone number of a defense attorney (leaving unsaid his continuance in practice and/or availability in your specific time of need) and carry that in your wallet or purse. I think I’ve discovered the – or at least a – corrective for all that.

Houston, Texas law firm Walker, Rice & Wisdom (specifically Michael D. Wisdom, Esq; wisdom@texaslawshield.com) has a retainer arrangement on offer to Texas gun owners and/or concealed handgun license holders. I’ve gone ahead and committed the $130+ ($150+ w/ one-time registration fee) to sign up and retain their services for a year on the strength of a trusted acquaintance’s recommendation.

TexasLawShield.com

(warning: turn down your speaker, has an annoying auto-start ad with no obvious off button, though turning down the sound further stopped it the last time I clicked on the site. Not a total advertising Fail if that’s an intended result.)

From their CHL brochure comes the following partial listing of services included:

-Texas wide protection
-established attorney/client relationship
-24/7 shooting hotline for clients
-legal representation for any police investigation, grand jury proceeding and criminal or civil trial
-no additional attorneys fees through trial

I leave it to my fellow Texans to determine for themselves if this meets their individual potential needs. If it needs be said, I’m not selling the program and get no credit or benefit from mentioning it on my blog; I am a client as I mentioned above.

I am curious to discover whether there are similar resources available in other states (or countries) that entail the same general selection of preventive measures – and is there a website reference to them? My understanding is that Texas Law Shield confines its present business model to Texas licensees and residents only, but intends to expand beyond the state’s borders in future. Are there any other efforts known to other gun bloggers that they (Wisdom and colleagues) might consider allying with, to extend protection beyond Texas and create a system of reciprocity? Are there any gun bloggers who are also attorneys interested in expanding their practice into a massively under-served market?

Wouldn’t it be nice if we each could travel about the several states with the confidence that, having managed to survive “the gravest extreme”, we aren’t out there all alone afterwards?

So …

… Weer’d Beard gives me the linky love, JayG and I do the blogroll sloppy kiss and Tamara shows me the tough love.

How was your Friday?

Range Report – S&W 625-10, s/n SCC0487 KABOOM!

1/14/11: See update at bottom of post.

As I mentioned here, I took an unexpected opportunity and bought a .45acp revolver. Smith & Wesson Performance Center guns have a bit of a legendary reputation after all. As detailed in the title above, the precise model and serial number are: 625-10, s/n SCC0487. Upon getting the gun home that first evening, I took the opportunity to thoroughly clean it (and my way-too-dirty Commander as well), and there was no visible evidence of flame cutting around the cylinder/barrel interface nor any indication of frame cracking or distortion apparent to a close visual inspection.

This being a scandium alloy frame gun, I determined to fire ammo having less bullet mass than the 230gr FMJ I usually target shoot with, as I went into some detail about in this post. I chose the Remington Express (having the least advertised muzzle velocity of the lead-bullet rounds I had purchased for this test firing) to shoot first and loaded six rounds into a moonclip. The first four rounds fired cleanly and without apparent incident, at which point I placed the gun on the shooting table for my regular FFL dealer (from whom I had bought the gun) to try the remaining ammo for himself (a small diversion here; he had only fired a total of less than 100 rounds of 230gr FMJ – and 5 or 6 rounds of +P ammo max – through the gun. It was an occasional pocket carry piece for him, but mostly a safe queen – he’d never fired 185gr ammo from it and I wanted his impression of any difference between the two bullet weights). When he went to fire the next round, the gun failed to cock properly for a single-action shot, but the cylinder appeared to advance normally. The next (and final remaining unfired) round suffered a light primer strike (which we were unaware of at this point in the process), but the gun double actioned cleanly through all four of the previously fired chambers immediately thereafter. The gun suffered the catastrophic spontaneous disassembly on the fifth shot actually fired (the round the gun failed to cock properly for previously).

Nobody was injured.

A segment of frame blew out to the shooter’s right into the stall partition, while the barrel flew up into the lane sound baffle material and fell back into the target distance-setting motor’s mounting metalwork (a U-shaped sheet metal construction located directly above the lane’s shooting table). It was surprisingly hard to hunt down afterwards, but we were eventually successful in rounding all the bits back up:

A bit of online research later that evening led to this Smith & Wesson-oriented forum thread discussing this very model of pistol. A quick read of the comment thread (there are only 14 entries) makes clear that these pistols have a known history: some of them had the barrel over-torqued during original assembly, with a resultant stress crack forming in the frame material surrounding the barrel threads.

Mine would appear to have been one of these.

For the equally pedantic, there is no visible sign on the barrel of the final (or any) bullet having been fired off-center to the bore. The five holes in the paper target are of equal size, and there is no evidence of the barrel having detached from the frame until after the bullet exited the muzzle. The gun still has a smooth double-action trigger pull, and the cylinder still rotates cleanly.

I have emailed Smith & Wesson customer support about this today, as their page clearly instructs:

If you have a question about repairing or servicing your firearm, parts questions, etc. email us with your question or call us, please do not use this [warranty work return label request ed.] form.

as well as having placed a call directly to the in-house extension S&W provides to arrange for non-warranty repair work (1-800-331-0852 Ext. 2905) at ~4:15pm Eastern on Thursday, January 13, 2011.

Bit of a fail there. [see update below]

Apparently Smith & Wesson customer service is so overwhelmed with work (leaves a questionable impression with the buying public, that does) that they automatically divert all incoming calls to a voice-mail box. Will someone notice, never mind actually respond? It’s a mystery. Stay tuned …

Quite frankly, since S&W no longer makes this particular model (which was always a limited-production, quasi-collector’s piece), I’m not at all sure what the resolution will be. Certainly S&W hasn’t got some secret stash of replacement pistols squirreled away (Lew Horton Distributors – who originally commissioned the guns, per my research – would have something pointed to say about that, I’m sure), so a straightforward swap is out. I also don’t think there’s any question of repairing such a catastrophic materials failure (can scandium alloy be welded? Interesting TIG challenge, that). And I don’t think I want their steel-frame 625 at any barrel length; I bought the gun for its light weight as a summer-wear concealed carry piece, not an application a steel 625 fills any better than my Colt Commander.

That will all have to wait for later though. First, I need to determine how to send my current gun pieces back to Performance Center (or whomever) to get the process started.

On a more uplifting note (for some), my Colt Commander sent ~60 rounds down range flawlessly. Some attention needs to be paid to the trigger appendage – a bit of sloppiness was observed there (though not really all over the target).

So, how was your day at the range?

UPDATE Friday, 1/14/2011 ~2:00pm: Just finished speaking with Joe Marcoux of Smith & Wesson. I told him briefly what had occurred; he requested I send him a picture via email, took a quick look, and took down my details to send me the appropriate shipping label with instructions by return mail. Quick, efficient and, including the wait on hold, the whole transaction took maybe 8 minutes tops (and would have gone quicker if I could manage the whole email-a-picture pokery-jiggery with two hands – I’ve got to buy one of those hands-free phone doohickeys) (actually, I did – now, where did I put that thing?). Can’t say this went painlessly, but if you’ve got to deal with such a tragic loss it’s always better to deal with professionals.

Well done to Mr. Marcoux and to Smith & Wesson.

Global Warming My A$$

WTF? This is Texas; it’s 21 Fargin’ degrees outside!

Which is actually the start of a heatwave. Yesterday at this time it was 17 degrees and all of 26 degrees at 10:30 am!

Is Al Gore somewhere in the area?
