Caano Geel

Nomads
  • Content Count: 1,812

Everything posted by Caano Geel

  1. ^^ no no dear, i think you're mixing it up with this ... and yes that is snake wine. talking of ng, maybe he's gone to VAL-erie's rescue! Sherban Shabeel, is this by any chance your friend during happier times?
  2. Man bites snake in epic struggle
BBC News, Wednesday, 15 April 2009 12:29 UK

A Kenyan man bit a python which wrapped him in its coils and dragged him up a tree during a fierce three-hour struggle, police have told the BBC. Police said the python involved in the attack was 13ft (4m) long.

The serpent seized farm worker Ben Nyaumbe in the Malindi area of Kenya's Indian Ocean coast at the weekend. Mr Nyaumbe bit the snake on the tip of the tail during the exhausting battle in the village of Sabaki. Police rescued Mr Nyaumbe and captured the 13ft (4m) reptile, before taking it to a sanctuary, but it later escaped.

The victim told police he managed to reach his mobile phone from his pocket to raise the alarm when the python momentarily eased its grip after hauling him up a tree on Saturday evening. Mr Nyaumbe used his shirt to smother the snake's head and prevent it from swallowing him. His employer arrived with police and villagers, who tied the python with a rope and pulled them both down from the tree with a thud.

Peter Katam, superintendent of police in Malindi district, told the BBC News website: "Two officers on patrol were called and they found this man was struggling with a snake on a tree.

"The snake had coiled his hands and was trying to swallow him but he struggled very hard. The officers and villagers managed to rescue him and he was freed.

"He himself was injured on the lower lip of the mouth - it was bleeding a little bit - as the tip of the snake's tail was sharp when he said he bit it."

Mr Nyaumbe told the Daily Nation newspaper how he resorted to desperate measures after the python, which had apparently been hunting livestock, encircled his upper body in its coils. "I stepped on a spongy thing on the ground and suddenly my leg was entangled with the body of a huge python," he said. "I had to bite it."

Supt Katam told the BBC the officers had wanted to shoot the snake but could not do so for fear of injuring Mr Nyaumbe. "If it wasn't for the villagers and officers who helped him, he would have been swallowed by the snake over the Easter holiday," said Supt Katam. He added: "It's very mysterious, this ability to lift the man onto the tree. I've never heard of this before."

The police officer said they took the snake to a sanctuary in Malindi town but it escaped overnight, probably from a gap under the door in the room where it was kept. "We are still seriously looking for the snake," said Supt Katam. "We want to arrest the snake because any one of us could fall a victim."
  3. thank you P. much appreciated. I did manage to set up websvn .. though i still can't get it to do exactly what i'm after. So joomla it is then, i'll set it up next week and report back. thanks again
  4. ^ sorry, nope, can't magic books
  5. ^ I realise this is not interesting for most peepz, but how shall i put it .. the more socially challenged amongst us might like to know. anyhow, another google term that i use daily is "define", i.e. you can ask google to define terms for you. e.g. typing "define: geek" into the google search box gives a list of dictionary definitions, like:

# a carnival performer who does disgusting acts
# eccentric: a person with an unusual or odd personality (wordnet.princeton.edu/perl/webwn)
# Geek! is the second EP by My Bloody Valentine, and their first to feature bass player Debbie Googe. It was released in December 1985. ... (en.wikipedia.org/wiki/Geek!)

anyhow, for those of you that use a shell often - specifically bash - the following function in your .bashrc config file is very useful.

code:
# Define a word
# written by Andrew V. Newman
# last modified Fri Mar 5 07:27:17 MST 2004
# Shell script to bring up google based definitions for any compound or simple word
# - USAGE:
##   define geophysics
##   define shell script
##   define bash function
define () {
    # Checks to see if you have a word in mind or just typing in the command. If the latter,
    # it will give you a short usage and example before returning with error code 1.
    if [ ${#} -lt "1" ]; then
        echo "Usage: `basename $0` 'TRM' "
        echo "  where TRM is the word that you would like defined"
        echo "  Examples:"
        echo "    `basename $0` Fedora"
        echo "    `basename $0` shell script"
        echo "  or:"
        echo "    `basename $0` google"
        return 1
    fi

    # Use 'resize' to correctly set the environmental variables for your terminal
    # (this may not work for all terminals).
    eval `resize -u`

    # Set the lynx program to your current term width (doesn't do it by
    # default if being piped directly into another file), and turn off the
    # default justification (can make output UGLY).
    LYNX="/usr/bin/lynx -justify=0 -width=$COLUMNS"

    # Set your 'more' or 'less' favorite pager.
    PAGER=/usr/bin/less

    # Sets a URL BASE assuming multiple variables as a compound word. The
    # way WORD is defined, it will replace all the blank spaces with the URL
    # equivalent of a blank '%20'.
    WORD=`echo ${@} | sed 's/ /%20/g'`
    #echo $WORD

    # Define the google URL to search for the definition.
    URL=http://www.google.com/search?q=define:$WORD

    # Call up the google search in lynx and dump the output to a temp file
    # and all stderr to /dev/null .
    $LYNX $URL -dump 2> /dev/null >define.tmp

    # Displays definition after stripping off unnecessary text before and
    # after the definitions (the first sed command only prints lines
    # between the line containing 'Definition' and the line containing
    # '_____', inclusive. Then it pipes it through a second sed command
    # which replaces the 'lynx' URL numbers with 'WEB:', before piping it
    # through a third sed command that wipes out that last line. Finally
    # the information is piped to a pager command which lets you page
    # through the text on-screen with the space bar.
    sed -n '/[dD]efinition/,/_____/p' define.tmp | sed '/\[[0-9][0-9]\]/s// WEB: /' | sed '/____/d' | $PAGER

    # Remove temporary file.
    rm -f define.tmp
}

obviously you need to install the command line browser "lynx". if you use ubuntu/debian, just type "aptitude install lynx" at the command line and that will install it.

what does it do, you ask? let's say you're in the terminal and you want to look up the definition of a term .. you just type "define <term>" and the definitions come up in your pager.

how does it work?
1. it calls google with "http://www.google.com/search?q=define:$WORD", where WORD is the term you typed. the results are dumped to a file called "define.tmp".
2. we call "sed" to strip off everything before "Definition" in the returned page, i.e. all the related stuff that google returns.

anyhow, the point is that those interested can adapt this to do things more relevant to your interests. (if bash isn't your thing, there's a rough python sketch of the same idea just below.)
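the python sketch mentioned above: it only shows the idea (build the "define:" query URL and dump the raw page) and is not a drop-in replacement for the function - the sed-style stripping and paging are left out entirely:

code:
#!/usr/bin/env python
# rough python 2 sketch of the same "define:" trick: build the google
# query for a term and print the raw html to stdout (no stripping/paging)
import sys, urllib, urllib2

term = ' '.join(sys.argv[1:]) or 'geek'
url = 'http://www.google.com/search?q=' + urllib.quote_plus('define:' + term)

# send a browser-ish user-agent with the request
req = urllib2.Request(url, None, {'User-Agent': 'Mozilla/5.0'})
print urllib2.urlopen(req).read()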
  6. ok seriously .. how in the world do people actually follow instructions for setting up web things.. in all honesty i'm half tempted to write my own apps to do the 'effin job.. case in point, read the following paragraph and see if you can decipher what it means for dummies like me!

Installation: Your dedicated svn sub-domain will not serve WebSVN. It's reserved for svn by DreamHost's dedicated Apache2 process. Install WebSVN in a standard sub-domain, and protect it if necessary with a .htaccess file, then configure it to point to a remote repository at your DreamHost SVN location.

http://wiki.dreamhost.com/WebSVN
  7. dear peepz in the know, i need your help! i know nothing about websites/content management and all those complicated things.. actually, that is an exaggeration, I know less than nothing! I own a domain, with space and etc, and i want the following:
1. a simple content management type thing - as in i tell it to do that, it asks no questions and does it (i.e. i've hated the rich text editors of wordpress in the past, they never did what you tell them and wouldn't render html and blah)
2. a private calendar that i can share with peepz and synchronise with my laptop and phone
3. mail - i want to host my own email - imap preferably - and to be able to download it to my laptop and etc.
4. subversion support -- for my work -- also this must support public and private profiles
my webhost offers the following one click installs (ha! like that helps when you have no idea): http://wiki.dreamhost.com/One_Click_Installs -- can anybody please help and suggest the right cobbling together of free software to achieve these things? muchos gracias
  8. adnaan, explain what evils this funkiness will be used for and i'll link you the other pdf
  9. I just finished reading and highly recommend Maus by Art Spiegelman. A very moving tale of a very Somali Jewish father. makes me wonder how long before we start dealing with our own past and current situation..
  10. ^ you need python (http://www.python.org/) installed.. the libs "os, socket, sys, re, urllib, urllib2" are all part of the basic install, and you run it from the command line. Since we are not creating any raw sockets, this doesn't need any special permissions. if you have a *nix or mac it's straightforward, follow the instructions here. wrt. python, yes it looks a lot like C++, they are both C derivatives. But python is much simpler to program with and great for quick hacks/prototyping and etc.. wrt. join, it is used to concatenate lists of strings - see the small example below.
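for illustration (these strings are made up, not taken from the script), this is all join does - glue a list of strings together using the string it's called on as the separator:

code:
# '|'.join(...) -> one string with '|' between the items
exts = ['mp3', 'm4a', 'ogg']
print '|'.join(exts)                                     # mp3|m4a|ogg
print ' '.join(['index.of', '"last modified"', 'size'])  # index.of "last modified" size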
  11. p.s. from the "def find_music(self, term, results_to_search='1 0', strict=True): " function, the query sent to google for "nina simone" is: code: vars= -inurl(htm|html|php|shtml|asp|jsp|pls|txt|aspx|jspx) intitle:index.of "last modified" description size (mp3|m4a|wma|wav|ogg|flac) term = nina simone these are joined and sanitised for http consumptions as below - the results of join variable, are used to generate the actual query that is sent code: join = -inurl:(htm|html|php|shtml|asp|jsp|pls|txt|aspx|jspx) intitle:index.of "last modified" description size (mp3|m4a|wma|wav|ogg|flac) nina simone query = -inurl%3A%28htm%7Chtml%7Cphp%7Cshtml%7Casp%7Cjsp%7Cpls%7Ctxt%7Caspx%7Cjspx%29+intitle%3Aindex.of+%22last+modified%22+description+size+%28mp3%7Cm4a%7Cwma%7Cwav%7Cogg%7Cflac%29++nina+simone
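just to see the sanitising step on its own, here's a tiny sketch (python 2, same as the script) of what urllib.quote_plus does - the option string here is a made-up, cut-down version just for readability, not the real one:

code:
import urllib

# cut-down stand-in for the real option string, only to show the encoding:
# spaces become '+', ':' becomes %3A, '(' / ')' become %28 / %29, '|' becomes %7C
vars = '-inurl:(htm|html) intitle:index.of (mp3|ogg)'
term = 'nina simone'
print urllib.quote_plus(' '.join([vars, term]))
# -> -inurl%3A%28htm%7Chtml%29+intitle%3Aindex.of+%28mp3%7Cogg%29+nina+simone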
  12. this is a very general hack, and the concept forms the base of a script that i use for generic media searches (original below, not written by me) .. you can modify the file types it searches, and i suggest you add a pause between the search and get phases, i.e. show the sites found and ask the user to continue. details are here: http://noflashlight.com/

code:
#!/usr/bin/env python
"""This script is published under the MIT License

Copyright © 2008 Nick Jensen

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE."""

import os, socket, sys, re, urllib, urllib2

socket.setdefaulttimeout(10)


class SimpleWGET(urllib.FancyURLopener):
    ''' a simple class to download and save files '''

    def __init__(self):
        urllib.FancyURLopener.__init__(self)
        urllib._urlopener = self

    def fetch(self, url, dst="."):
        fn = os.sep.join([dst, urllib.unquote_plus(url.split('/').pop())])
        if os.path.isfile(fn):
            print "Skipping... %s already exists!" % fn
            return
        if sys.stdout.isatty():
            try:
                urllib.urlretrieve(url, fn,
                    lambda nb, bs, fs, url=url, fn=fn: self._reporthook(nb, bs, fs, url, fn))
                sys.stdout.write('\n')
            except IOError, e:
                print str(e)
            except Exception, e:
                pass
        else:
            urllib.urlretrieve(url, fn)

    def _reporthook(self, numblocks, blocksize, filesize, url=None, fn=None):
        try:
            percent = min((numblocks * blocksize * 100) / filesize, 100)
        except:
            percent = 100
        if numblocks != 0:
            sys.stdout.write('\r')
        if len(fn) > 65:
            third = int(round(len(fn) / 3))
            fn = '%s...%s' % (fn[:third], fn[third * 2:])
        sys.stdout.write("%3s%% - Downloading - %s" % (percent, fn))
        sys.stdout.flush()

    def http_error_default(self, *args, **kw):
        print "Oops... %s - %s" % (args[2], args[3])


class Google:
    ''' search google and return a list of result links '''

    @classmethod
    def search(class_, params, num_results='10'):
        valid_num_results = ['5', '10', '25', '50', '100']
        if num_results in valid_num_results:
            # build the query
            url = 'http://www.google.com/search?num=%s&q=%s' % (num_results, params)
            # spoof the user-agent
            headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11'}
            req = urllib2.Request(url, None, headers)
            html = urllib2.urlopen(req).read()
            results = []
            # regex the results
            atags = re.compile(r'<a.*?>', re.IGNORECASE)
            href = re.compile(r'(?<=href=("|\')).*?(?=\1)', re.IGNORECASE)
            for link in atags.findall(html):
                if "class=l" in link:
                    results.append(href.search(link).group())
            return results
        else:
            raise ValueError("num_results must be either %s" % ', '.join(valid_num_results))


class MediaFinder(SimpleWGET):

    def __init__(self):
        SimpleWGET.__init__(self)

    def find_music(self, term, results_to_search='10', strict=True):
        ''' do a smart google search for music, crawl the result pages and
            download all audio files found. '''
        audio_ext = ['mp3', 'm4a', 'wma', 'wav', 'ogg', 'flac']
        audio_options = {
            '-inurl': ['(htm|html|php|shtml|asp|jsp|pls|txt|aspx|jspx)'],
            'intitle': ['index.of', '"last modified"', 'description', 'size',
                        '(%s)' % '|'.join(audio_ext)]
        }
        vars = ''
        for opt, val in audio_options.items():
            vars = '%s%s:%s ' % (vars, opt, ' '.join(val))
        query = urllib.quote_plus(' '.join([vars, term]))
        try:
            google_results = Google.search(query, results_to_search)
        except urllib2.URLError, e:
            print "Error fetching google results: %s" % e
            return  # nothing to crawl if google couldn't be reached
        i = 1
        for url in google_results:
            print "Searching %s of %s pages." % (i, len(google_results))
            if strict:
                self.download_all(url, audio_ext, term)
            else:
                self.download_all(url, audio_ext)
            i += 1

    def download_all(self, url,
                     extensions=['avi', 'divx', 'mpeg', 'mpg', 'wmv', 'flv',
                                 'mp3', 'm4a', 'wma', 'wav', 'ogg', 'flac'],
                     term=''):
        ''' download all files found on a web page that have an extension
            listed. (optionally match a term found in the URL) '''
        media = re.compile(r'(?<=href=("|\'))[^"\']+?\.(%s)(?=\1)' % '|'.join(extensions),
                           re.IGNORECASE)
        try:
            req = urllib.urlopen(url)
            html = req.readlines()
            req.close()
            found = []
            for line in html:
                match = media.search(line)
                if match:
                    link = match.group()
                    if 'http://' in link:
                        # absolute url
                        found.append(link)
                    elif link[0] == '/':
                        # from web root
                        root_url = url[0:url.find('/', 7)]
                        found.append(''.join([root_url, link]))
                    else:
                        # relative urls ../ ./ etc.
                        link_dir = url[0:url.rfind('/')].split('/')
                        for piece in link.split('../')[:-1]:
                            link_dir.pop()
                        filename = re.search('[^./].*$', link).group()
                        link_dir.append(filename)
                        found.append('/'.join(link_dir))
            if not found:
                print "No media found..."
            else:
                print "Found %s files!" % len(found)
            for url in found:
                if not term or strip_and_tolower(term) in strip_and_tolower(url):
                    self.fetch(url)
        except IOError, e:
            print "Error: %s" % e


def strip_and_tolower(url):
    strip = re.compile('[^\w]', re.IGNORECASE)
    return strip.sub('', urllib.unquote_plus(url.lower()))


if __name__ == "__main__":
    finder = MediaFinder()
    args = sys.argv
    if len(args) in range(2, 6):
        term = args[1]
        results_to_search = '10'
        strict = True
        if '-r' in args:
            results_to_search = str(args[args.index('-r') + 1])
        if '--non-strict' in args:
            strict = False
        try:
            finder.find_music("""%s""" % term, results_to_search, strict)
        except:
            print "\n"
    else:
        print """usage: %s "search terms" [-r (5|10|25|50|100) [--non-strict]]""" % args[0]

if you need any help understanding the structure and etc .. gimme a shout - there's also a small usage sketch just below.
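if you'd rather drive it from your own python code instead of the shell, a minimal sketch - assuming you've saved the script above as mediafind.py next to your code (that filename is my own choice, not from the original):

code:
# import the downloader class from the script above (saved here as mediafind.py)
from mediafind import MediaFinder

finder = MediaFinder()
# crawl ~10 google result pages; strict=True only fetches files whose URL contains the term
finder.find_music('nina simone', results_to_search='10', strict=True)

from the shell it's just the usage string at the bottom of the script, e.g.: python mediafind.py "nina simone" -r 25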
  13. Definition of Doublethink: Doublethink is the act of simultaneously accepting as correct two mutually contradictory beliefs. It is related to, but distinct from, hypocrisy and neutrality. ... i'll leave it at that
  14. ^ Then walaal rename the thread to encourage and welcome others to fill in for the rest of Somalia.
  15. Edit: Somali Singers & Composers Thread - History and Legacy unless of course you have identified new non-Somali people there? p.s. Cumar Dhuule was a family friend who lived in Muqdisho a few doors from my grandmother ...
  16. ^^ with regard to you, from what i can tell, a sex change and an affinity to giggle at the honourable sayid's bad jokes while leaning suggestively
  17. sayid, your pitch sounds more like a pick-up line than a ... wot waz it?
  18. Congrats brother, let's hope she takes after her mother
  19. Sherban Shabeel, that's kaptain kaluun's younger brother, shehehehhehehehe will be able to tell you all 'bout them.. this i love right now
  20. ^ she's long gone but she'll never be forgotten
  21. much like the tom and jerry cartoons, it's always of mice and women for me
  22. this is style.. my first and last car (just imagine a smidgen more rust, back doors that don't open and mysterious creaks) - her name was Farcuun, she was fiery red and made doing 60 on the motorway feel like it would be the last ride of your days .. you rode like a bat outa-hell baby .. i still love ya