Thursday, November 28, 2013

spreadsheet-like line plot with filled areas in python

A nice way to show whether a series of values falls within a certain range (which may differ per sample in the series) is to make a line plot with a shaded area indicating the range.

Example:



A line plot is easily made in a typical spreadsheet program. Getting the correct region shaded by combining area and line plots was, however, much too cumbersome for me. Therefore I had a look at whether matplotlib also supports area plots, and luckily it does: fill_between. The only difficulty I encountered was that matplotlib is mainly intended for making scatter plots, i.e. data series with meaningful x and y coordinates. A typical spreadsheet line plot, however, has text labels for all points on the x-axis (as shown in the example). The easiest solution I could come up with was to simply number the samples in the data series as 0, 1, 2, etc. and then change the ticks on the x-axis manually with the xticks function. If there are too many samples and the x-axis gets too crowded so that labels start to overlap, most spreadsheet programs simply drop some labels. The code below does the same. The code seems a bit long, but most lines are actually used for reading in the data from a csv file.

#!/usr/bin/env python
import csv

from pylab import *


violet=(90.0/255.0,36.0/255.0,90.0/255.0)
red=(1.0,0.0,0.0)
green=(0.0,1.0,0.0)


def plotFancy(fn, label,figNum=None,ymin=0.0,ymax=500.0,maxticks=20):
    """
    plots directly from a csv file (with a header row!!)
    file layout:
    column
    1: strip name
    2: Exp
    3: Mod
    4: min
    5: max
    
    use label to name y-axis
    if figNum is supplied the graph will be plotted in the figure with
    that number (and cleared first)
    ymin and ymax determine the scale on the y-axis (i.e. ylim(ymin,ymax))
    maxticks gives the maximum number of ticks (labels) allowed on the x axis
    """
    f=open(fn,'rb')
    reader=csv.reader(f,delimiter=';')
    xlabel_lst=[]
    y_min_lst=[]
    y_max_lst=[]
    y_mod_lst=[]
    y_exp_lst=[]
    line_cnt=0
    for line_lst in reader:
        line_cnt+=1
        if(line_cnt==1):
            continue
        xlabel_lst.append(line_lst[0])
        y_exp_lst.append(float(line_lst[1]))
        y_mod_lst.append(float(line_lst[2]))
        y_min_lst.append(float(line_lst[3]))
        y_max_lst.append(float(line_lst[4]))
    cnt_lst= [i for i in range(len(y_mod_lst))]
    f.close()
    if(figNum==None):
        figure()
    else:
        figure(figNum)
        clf()
    fill_between(cnt_lst,y_min_lst,y_max_lst,facecolor=green,alpha=1.0)
    plot(cnt_lst,y_mod_lst,'bo',color=violet,label="%s mod"%(label),ms=12)
    plot(cnt_lst,y_exp_lst,'mv',color=red,label="%s exp"%(label),ms=12)
    ylim(ymin,ymax)
    ylabel("%s"%(label))
    grid(b=True)
    legend(loc=9)
    show()
    if(len(xlabel_lst)>maxticks):
        delta=len(xlabel_lst)/float(maxticks-1.0)
        tick_num_lst=[]
        tick_text_lst=[]
        index=0
        for i in range(maxticks-1):
            index=int(i*delta)
            tick_num_lst.append(index)
            tick_text_lst.append(xlabel_lst[index])
        tick_num_lst.append(len(xlabel_lst)-1)
        tick_text_lst.append(xlabel_lst[len(xlabel_lst)-1])
        xticks(tick_num_lst,tick_text_lst,rotation=90)
        
    else:
        xticks(arange(len(xlabel_lst)),xlabel_lst,rotation=90)
    xlim(0,len(xlabel_lst))
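
The label-thinning part of plotFancy can be sketched on its own. The helper below is hypothetical (not part of the original script) but mirrors the same logic: pick at most maxticks evenly spaced positions, always keeping the first and last label.

```python
def thin_ticks(labels, maxticks=20):
    """Pick at most maxticks evenly spaced tick positions and labels,
    always keeping the first and the last label (mirrors plotFancy)."""
    if len(labels) <= maxticks:
        return list(range(len(labels))), list(labels)
    delta = len(labels) / float(maxticks - 1)
    positions = [int(i * delta) for i in range(maxticks - 1)]
    positions.append(len(labels) - 1)  # always show the last label
    return positions, [labels[i] for i in positions]

# Example: 100 labels thinned to at most 5 ticks
pos, txt = thin_ticks(["s%d" % i for i in range(100)], maxticks=5)
```

The returned positions and labels can be passed straight to xticks(pos, txt, rotation=90).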

Saturday, September 7, 2013

Reading/writing a python dictionary to file

To save time building a large dictionary every time I run my program, I googled "saving a python dictionary to file". Of the suggested solutions I liked the option to write to a csv file best. However, the posted code did not work for me, because the values in my dictionary were very big nested lists of lists and not simple strings. This was easy to fix by calling eval on the value obtained from the csv reader. Of course I was not the first one to realize this.

Below for completeness my code:

import csv

def saveDict(fn,dict_rap):
    f=open(fn, "wb")
    w = csv.writer(f)
    for key, val in dict_rap.items():
        w.writerow([key, val])
    f.close()
    
def readDict(fn):
    f=open(fn,'rb')
    dict_rap={}
    
    for key, val in csv.reader(f):
        dict_rap[key]=eval(val)
    f.close()
    return(dict_rap)
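
A note for anyone reusing this: eval will execute arbitrary expressions from the file, so for input you do not fully trust, ast.literal_eval is the safer choice. A minimal Python 3 sketch of the same idea (hypothetical function names; text-mode files with newline='' as the csv module expects there):

```python
import ast
import csv

def save_dict(fn, d):
    with open(fn, "w", newline="") as f:
        w = csv.writer(f)
        for key, val in d.items():
            w.writerow([key, repr(val)])

def read_dict(fn):
    d = {}
    with open(fn, newline="") as f:
        for key, val in csv.reader(f):
            # literal_eval only parses Python literals, it never runs code
            d[key] = ast.literal_eval(val)
    return d
```

Note that this round-trips nested lists of numbers fine, but the keys come back as strings, just as in the original code.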

Monday, September 2, 2013

Creating a Cython extension type for use with multiProcessing for function fitting

If you have to fit a complex function to a very big data set, it would be nice to be able to use all the cores your CPU has. Because the data set is very big, it should be efficient to simply split the data set over a number of cores and calculate the total error sum (sum of squared errors) in parts, in parallel. This sounds simple, but it took me some effort to do this in Python/Cython on both linux and windows. After googling for a while, I decided that using the multiprocessing module should work best for my specific situation (which contains a lot of python code, which makes it difficult to turn the GIL off temporarily). On linux I had things running relatively fast, but on windows I could not get it to function. The difference is caused by the lack of fork() on windows: on a fork() in linux everything is copied, but this does not happen on windows, and you have to take care that all data is correctly passed to the child process (read "Explicitly pass resources to child processes" in the multiprocessing documentation).

To try things out I started with a simple example:

#!/usr/bin/env python
from multiprocessing import Process,Queue
import sys,numpy,pylab

class TFitFunc:
    def __init__(self,X0,x,y,pid=1):
        self.a=X0[0]
        self.b=X0[1]
        self.c=X0[2]
        self.x=x[:]
        self.y=y[:]
        self.pid=pid
        
    def __call__(self,X):
        self.a=X[0]
        self.b=X[1]
        self.c=X[2]
        errsum=0
        for i in range(100):
            ymod=self.a*self.x**2+self.b*self.x+self.c
            errsum+=numpy.sum((ymod[:]-self.y[:])**2)
        
        return(errsum)
        
        
class TFitFuncComplex:
    def __init__(self,X0,x,y,pid=1):
        self.a=X0[0]
        self.b=X0[1]
        self.c=X0[2]
        self.x=x[:]
        self.y=y[:]
        self.pid=pid
        
    def __call__(self,X):
        self.a=X[0]
        self.b=X[1]
        self.c=X[2]
        errsum=0
        for i in range(100):
            ymod=self.a*self.x**2+self.b*self.x+self.c+\
                numpy.sin(numpy.sqrt(self.x))*numpy.cos(self.x+0.5)\
                /self.a*numpy.sqrt(self.b)
            errsum+=numpy.sum((ymod[:]-self.y[:])**2)
        
        return(errsum)        
        
def f(fitfunc,X,Q=None):
    errsum=fitfunc(X)
    print "%d: errsum:%e"%(fitfunc.pid,errsum)
    if(Q!=None):
        Q.put(errsum)
    return(errsum)
    
def main():
    a=1.0
    b=2.0
    c=3.0
    x1=numpy.r_[0:10:-10000000j]
    x2=numpy.r_[0:10:-10000000j]
    
    y1=a*x1**2+b*x1+c
    y1+=y1*0.35*(numpy.random.random(len(x1))-0.5)
    y2=a*x2**2+b*x2+c
    y2+=y2*0.35*(numpy.random.random(len(x2))-0.5)
    X0=[0.5,2.5,1.5]
    f1=TFitFunc(X0,x1,y1,1)
    f2=TFitFunc(X0,x2,y2,2)
    
    ps=[]
    for i in range(2):
        if(i==0):
            p=Process(target=f,args=(f1,X0))
        else:
            p=Process(target=f,args=(f2,X0))
        p.start()
        ps.append(p)
    for p in ps:
        p.join()
    
    
def main_complex():
    a=1.0
    b=2.0
    c=3.0
    x1=numpy.r_[0:10:-1000000j]
    x2=numpy.r_[0:10:-1000000j]
    
    y1=a*x1**2+b*x1+c
    y1+=y1*0.35*(numpy.random.random(len(x1))-0.5)
    y2=a*x2**2+b*x2+c
    y2+=y2*0.35*(numpy.random.random(len(x2))-0.5)
    X0=[0.5,2.5,1.5]
    f1=TFitFuncComplex(X0,x1,y1,1)
    f2=TFitFuncComplex(X0,x2,y2,2)
    
    ps=[]
    Qs=[]
    errSum=0.0
    for i in range(2):
        Qs.append(Queue())
        if(i==0):
            p=Process(target=f,args=(f1,X0,Qs[i]))
        else:
            p=Process(target=f,args=(f2,X0,Qs[i]))
        p.start()
        ps.append(p)
    for i in range(2):
        errSum+=Qs[i].get()
        ps[i].join()
    print "Total errsum: %e"%(errSum)
    
    
def main_single_complex():
    a=1.0
    b=2.0
    c=3.0
    x1=numpy.r_[0:10:-1000000j]
    x2=numpy.r_[0:10:-1000000j]
    
    y1=a*x1**2+b*x1+c
    y1+=y1*0.35*(numpy.random.random(len(x1))-0.5)
    y2=a*x2**2+b*x2+c
    y2+=y2*0.35*(numpy.random.random(len(x2))-0.5)
    X0=[0.5,2.5,1.5]
    f1=TFitFuncComplex(X0,x1,y1,1)
    f2=TFitFuncComplex(X0,x2,y2,2)
    errSum=f(f1,X0)
    errSum+=f(f2,X0)
    print "Total errsum: %e"%(errSum)

def main_single():
    a=1.0
    b=2.0
    c=3.0
    x1=numpy.r_[0:10:-10000000j]
    x2=numpy.r_[0:10:-10000000j]
    
    y1=a*x1**2+b*x1+c
    y1+=y1*0.35*(numpy.random.random(len(x1))-0.5)
    y2=a*x2**2+b*x2+c
    y2+=y2*0.35*(numpy.random.random(len(x2))-0.5)
    X0=[0.5,2.5,1.5]
    f1=TFitFunc(X0,x1,y1,1)
    f2=TFitFunc(X0,x2,y2,2)
    f(f1,X0)
    f(f2,X0)
    
if __name__=='__main__':
    main_single()
    #~ main()
    #~ main_complex()
    #~ main_single_complex()

This works fine on linux and windows. However, this is pure python. Normally I use a lot of Cython code in extension types (aka Cython classes), and those I could not get to work without some more research. First my solution: the first part shows the .pyx file with two classes, one normal python class with some Cython code in it and one real extension type. The second part shows the script using these classes.

TFitFunctions.pyx
#!/usr/bin/env python
import numpy as np
cimport numpy as np

class TFitFunc:
    """
    Simple demonstration class to be used as fit function.
    By definition of the __call__ member function an object of this
    class is callable (functor). All additional data required 
    to calculate the error sum should be passed to the constructor.
    The function here is simply
    y=a*x**2+b*x+c
    """
    def __init__(self,X0,x,y,pid=1):
        """
        Constructor. 
        Arguments:
        X0: list of initial values for the three model parameters [a, b, c]
        x: array of x values
        y: array of y values (typically experimentally determined data points)
        pid: optional "process id"
        """
        self.a=X0[0]
        self.b=X0[1]
        self.c=X0[2]
        self.x=x[:]
        self.y=y[:]
        self._pid=pid
    
    def pid(self):
        return(self._pid)
        
    def __call__(self,X):
        """
        Make objects of this class callable. The argument is a list/array
        of model parameter values [a,b,c]
        The function returns the sum squared error
        """
        cdef double errsum
        cdef int i
        self.a=X[0] #could also have used X directly in the calculation below
        self.b=X[1]
        self.c=X[2]
        errsum=0
        for i in range(100): #do this a hundred times to waste some CPU time
            ymod=self.a*self.x**2+self.b*self.x+self.c #calculate model values
            errsum+=np.sum((ymod[:]-self.y[:])**2) # calculate summed square error
        errsum/=100.0 
        return(errsum)
        
        
cdef class TFitFuncComplex:
    """
    Simple demonstration class to be used as fit function.
    Very similar to TFitFunc but with a more complex (and time consuming)
    function. Another big difference is that now the class is defined
    as an extension type. 
    
    By definition of the __call__ member function an object of this
    class is callable (functor). All additional data required 
    to calculate the error sum should be passed to the constructor.
    The function here is simply
    y=a*x**2+b*x+c+sin(sqrt(x))*cos(x+0.5)/a*sqrt(b)
    """
    cdef double a,b,c   #in an extension type class member variables must be defined here
    cdef int _pid
    cdef np.ndarray x,y
    
    def __init__(self,X0,np.ndarray[double, ndim=1]x,np.ndarray[double, ndim=1]y,int pid=1):
        """
        Constructor. 
        Arguments:
        X0: list/array of initial values for the three model parameters [a, b, c]
        x: array of x values
        y: array of y values (typically experimentally determined data points)
        pid: optional "process id"
        """
        self.a=X0[0]
        self.b=X0[1]
        self.c=X0[2]
        self.x=x[:]
        self.y=y[:]
        self._pid=pid
        
    def pid(self):
        return(self._pid)
        
    def __call__(self,X):
        """
        Make objects of this class callable. The argument is a list/array
        of model parameter values [a,b,c]
        The function returns the sum squared error
        """
        cdef double errsum
        cdef int i
        self.a=X[0] #could also have used X directly in the calculation below
        self.b=X[1]
        self.c=X[2]
        errsum=0
        for i in xrange(100):#do this a hundred times to waste some CPU time
            ymod=self.a*self.x**2+self.b*self.x+self.c+\
                np.sin(np.sqrt(self.x))*np.cos(self.x+0.5)\
                /self.a*np.sqrt(self.b) #calculate model values
            errsum+=np.sum((ymod[:]-self.y[:])**2) # calculate summed square error
        errsum/=100.0 
        return(errsum)
        
   
    def __reduce__(self):
        """
        Without this function the code will not run with multiProcessing
        on Windows.
        It makes the extension type picklable. For a normal python
        class this is not required (see TFitFunc).
        """
        return TFitFuncComplex, ([self.a,self.b,self.c],self.x,self.y,self._pid)

calling script:

#!/usr/bin/env python
from multiprocessing import Process,Queue
import sys,numpy
from TFitFunctions import TFitFunc,TFitFuncComplex

"""
Demonstration of using Process with callable Python/Cython classes.
Can be easily extended into a real multiProcessing fitting
setup for use with (e.g.) fmin

The basic idea is that you have a huge amount of data points
that must be evaluated for the calculation of the summed squared error.
These data points are independent by nature and thus ideal for
parallel processing.
"""
      
        
def f(fitfunc,X,Q=None):
    """
    Function to be passed to Process as target. The function
    will call fitfunc with X as argument and put the result in the
    Queue Q if one is passed as argument.
    """
    errsum=fitfunc(X)
    print "%d: errsum:%e"%(fitfunc.pid(),errsum)
    if(Q!=None):
        Q.put(errsum)
    return(errsum)
    
def main():
    """
    Demonstration of use of TFitFunc
    """
    a=1.0 #model parameters
    b=2.0
    c=3.0
    x1=numpy.r_[0:10:-10000000j] #data points for first process (x) 
    x2=numpy.r_[0:10:-10000000j] #measurement points for second process (x)
    
    y1=a*x1**2+b*x1+c  #"measured" data for process 1
    y1+=y1*0.35*(numpy.random.random(len(x1))-0.5) #add some noise
    y2=a*x2**2+b*x2+c #"measured" data for process 2
    y2+=y2*0.35*(numpy.random.random(len(x2))-0.5) #add some noise
    X0=[0.5,2.5,1.5] #initial guess for model parameters
    f1=TFitFunc(X0,x1,y1,1) #Create fit function for process 1
    f2=TFitFunc(X0,x2,y2,2) #Create fit function for process 2
    
    ps=[] #to contain process
    Qs=[]
    errSum=0.0
    for i in range(2):
        Qs.append(Queue())
        if(i==0):
            p=Process(target=f,args=(f1,X0,Qs[i])) #create process 1
        else:
            p=Process(target=f,args=(f2,X0,Qs[i])) #create process 2
        p.start() #start process
        ps.append(p) #add process "handle"
    
    for i in range(2):
        errSum+=Qs[i].get() #collect error sums from processes
        ps[i].join() #wait for process to finish
    print "Total errsum: %e"%(errSum)
    

    
def main_complex():
    """
    Same as main but now with TFitFuncComplex
    """
    a=1.0
    b=2.0
    c=3.0
    x1=numpy.r_[0:10:-1000000j]
    x2=numpy.r_[0:10:-1000000j]
    
    y1=a*x1**2+b*x1+c
    y1+=y1*0.35*(numpy.random.random(len(x1))-0.5)
    y2=a*x2**2+b*x2+c
    y2+=y2*0.35*(numpy.random.random(len(x2))-0.5)
    X0=[0.5,2.5,1.5]
    f1=TFitFuncComplex(X0,x1,y1,1)
    f2=TFitFuncComplex(X0,x2,y2,2)
    
    
    ps=[]
    Qs=[]
    errSum=0.0
    for i in range(2):
        Qs.append(Queue())
        if(i==0):
            p=Process(target=f,args=(f1,X0,Qs[i]))
        else:
            p=Process(target=f,args=(f2,X0,Qs[i]))
        p.start()
        ps.append(p)
    for i in range(2):
        errSum+=Qs[i].get()
        ps[i].join()
    print "Total errsum: %e"%(errSum)
    

    
def calcErrorSum(X,fitfuncs):
    """
    Calculate error sum for given model parameters (X)
    by using the functions in fitfuncs in (parallel) processes
    """
    ps=[]
    Qs=[]
    errSum=0.0
    for i in range(len(fitfuncs)):
        Qs.append(Queue())
        p=Process(target=f,args=(fitfuncs[i],X,Qs[i]))
        p.start()
        ps.append(p)
    for i in range(len(fitfuncs)):
        errSum+=Qs[i].get()
        ps[i].join()
    print "Total errsum: %e"%(errSum)
    return(errSum)
    
def main_complex2():
    """
    Same as main_complex but now with the data creation part separated
    from the function evaluation part. Please note that
    the function calcErrorSum can be used as a (multiProcessing)
    argument to (e.g.) fmin
    """
    a=1.0
    b=2.0
    c=3.0
    x1=numpy.r_[0:10:-1000000j]
    x2=numpy.r_[0:10:-1000000j]
    
    y1=a*x1**2+b*x1+c
    y1+=y1*0.35*(numpy.random.random(len(x1))-0.5)
    y2=a*x2**2+b*x2+c
    y2+=y2*0.35*(numpy.random.random(len(x2))-0.5)
    X0=[0.5,2.5,1.5]
    f1=TFitFuncComplex(X0,x1,y1,1)
    f2=TFitFuncComplex(X0,x2,y2,2)
    fitfuncs=[f1,f2]
    
    calcErrorSum(X0,fitfuncs)
    
        
    
def main_single_complex():
    """
    Same as main_complex but now with serial evaluation (for time comparison purposes)
    """
    a=1.0
    b=2.0
    c=3.0
    x1=numpy.r_[0:10:-1000000j]
    x2=numpy.r_[0:10:-1000000j]
    
    y1=a*x1**2+b*x1+c
    y1+=y1*0.35*(numpy.random.random(len(x1))-0.5)
    y2=a*x2**2+b*x2+c
    y2+=y2*0.35*(numpy.random.random(len(x2))-0.5)
    X0=[0.5,2.5,1.5]
    f1=TFitFuncComplex(X0,x1,y1,1)
    f2=TFitFuncComplex(X0,x2,y2,2)
    errSum=f(f1,X0)
    errSum+=f(f2,X0)
    print "Total errsum: %e"%(errSum)

def main_single():
    """
    Same as main but now with serial evaluation (for time comparison purposes)
    """
    a=1.0
    b=2.0
    c=3.0
    x1=numpy.r_[0:10:-10000000j]
    x2=numpy.r_[0:10:-10000000j]
    
    y1=a*x1**2+b*x1+c
    y1+=y1*0.35*(numpy.random.random(len(x1))-0.5)
    y2=a*x2**2+b*x2+c
    y2+=y2*0.35*(numpy.random.random(len(x2))-0.5)
    X0=[0.5,2.5,1.5]
    f1=TFitFunc(X0,x1,y1,1)
    f2=TFitFunc(X0,x2,y2,2)
    f(f1,X0)
    f(f2,X0)
    
if __name__=='__main__':
    #~ main_single()
    #~ main()
    #~ main_complex2()
    main_complex()
    #~ main_single_complex()

The code in TFitFunctions.pyx shows that the trick for an extension type is to add the __reduce__ method to make it picklable. This is only needed on windows.
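
The same protocol can be illustrated in pure Python with a hypothetical minimal class (for a normal python class pickle can do this automatically, but the example shows exactly what __reduce__ supplies): a callable and an argument tuple from which pickle rebuilds the object.

```python
import pickle

class PointPair:
    """Hypothetical example class, reconstructible via __reduce__."""
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def __reduce__(self):
        # On unpickling, pickle will call PointPair(self.a, self.b)
        return PointPair, (self.a, self.b)

p = PointPair(1.5, 2.5)
q = pickle.loads(pickle.dumps(p))  # a new, equal object
```

For the Cython extension type above, __reduce__ plays the same role: it tells pickle how to rebuild the object from its constructor arguments, which is what multiprocessing needs on windows to pass the fit functions to the child processes.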

Sunday, May 26, 2013

cython wraparound problems

For performance reasons I was including the line

#cython: wraparound=False

in most of my Cython files. After an update to a newer version of Cython (0.19) my Cython scripts started to crash. It took me some time to find out that, combined with the wraparound=False line, you cannot use [-1] as the index for the last element anymore (this is also mentioned in the Cython documentation).
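
The fix is to replace negative indices with explicit length arithmetic. In plain Python (shown here only to illustrate the equivalence) the two forms select the same element:

```python
# With wraparound=False in Cython, arr[-1] is no longer valid;
# use an explicit index computed from the length instead.
arr = [10, 20, 30, 40]
last_wraparound = arr[-1]           # fine in Python and default Cython
last_explicit = arr[len(arr) - 1]   # the form required with wraparound=False
```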

Saturday, May 18, 2013

Allowing only a single instance of a program to open using Qt

Many text editors open a new tab and not a new editor window when a second (etc. etc.) document is opened. For my program that displays curves of simulation results I wanted to have similar behaviour. This program is written in Qt using Qwt (also see this earlier post). This program already supports drag and drop and file association (i.e. double clicking on a file to open it with this program). To prevent a second program from opening when a second file associated with this program is double clicked I only had to follow this example.

Monday, March 25, 2013

Python profiling trials

Python comes with some default profiling capabilities. However, as also noted here, this profiling can slow down your program to a crawl, making the gathering of useful statistics very tedious. Although it is very easy to get the proposed alternative "plop" running, the output it generates is very limited, and I did not find a description of how to get a more traditional output from this profiler. statprof does generate traditional output without slowing your program down very much, but it does not seem to deal correctly with wrapper functions, which severely limits its applicability to my typical programs. Thus, unfortunately, I seem to be stuck with cProfile for a while.

A nice introduction to cProfile is given on slippen's blog.

To turn on profiling of Cython code add the line

#cython: profile=True

to the top of your .pyx file (including the hash).
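
For reference, a minimal cProfile invocation looks like this (standard-library API; the profiled function here is just a stand-in for your own code):

```python
import cProfile
import io
import pstats

def work():
    # stand-in for the code you actually want to profile
    return sum(i * i for i in range(10000))

pr = cProfile.Profile()
pr.enable()
result = work()
pr.disable()

# Print the 5 most expensive entries by cumulative time
buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
```

With the profile=True directive in place, calls into your compiled Cython functions show up in this report just like python-level calls.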

Sunday, March 10, 2013

using SWIG to connect python and a c++ dll

Although I am a great fan of Cython, when you already have a class defined in C++ code you do not want to rewrite it for Cython. A long time ago I once used SWIG to create a link between C++ classes and python. This is fairly easy as long as you have clear interface functions that you can use to pass data to and from your class. Nevertheless, it took me some effort to get things running for my new class.

Following the SWIG tutorial I started with something like this in myExample.i:
%module myExample
 %{
 /* Includes the header in the wrapper code */
 #include "myModule.h"
 #include "myModule2.h"
 %}
 
 /* Parse the header file to generate wrappers */
 #include "myModule.h"
 #include "myModule2.h"

After compiling this I got a module that I could import in Python. However, the module did not contain the classes I had defined in my C++ code. Because there are not so many clear examples of SWIG and C++, I could not really find an explanation for this. Only after recreating the exact example of the tutorial (using copy/paste from the web) did I find that I had made a simple typing error: the #include in the second part of myExample.i should be %include!

%module myExample
 %{
 /* Includes the header in the wrapper code */
 #include "myModule.h"
 #include "myModule2.h"
 %}
 
 /* Parse the header file to generate wrappers */
 %include "myModule.h"
 %include "myModule2.h"

Now everything works nicely. Compiling is not very difficult. I have setup a Makefile for this:

CPP=g++

CFLAGS=  -Wunused  -O3

PYTH_INCL = /c/Python27/include
PYTH_LIB = /c/Python27/Libs

PYLIB = -lpython27

OBJS= myModule.o
modOBJS= myModule2.o
SRC_DIR = ..
all: myExample_mod

myExample_mod: $(modOBJS) myExample.i
 swig -I.. -python -c++ -shadow myExample.i
 $(CPP) -I$(PYTH_INCL) $(INCLUDE) -c myExample_wrap.cxx
 $(CPP) -shared -L$(PYTH_LIB) myExample_wrap.o $(OBJS) $(PYLIB) $(LIBS) -o _myExample.pyd

%.o: $(SRC_DIR)/%.cpp $(SRC_DIR)/%.h
 $(CPP) -c $(CFLAGS) $(INCLUDE)  $< -o $@

With this Makefile you can use import myExample in your python code to import the module. Thus far I have only used simple interface methods in my classes that work with integers and floats as input and output variables. This makes communication between the module and python very simple (no special effort seems to be required). I assume that if you want to pass arrays, strings, etc. things will become more difficult.

Wednesday, March 6, 2013

cython -mno-cygwin problems

The command I use to compile a cython extension on windows:

python setup.py build_ext --inplace --compiler=mingw32

(see also these two posts) has problems with newer mingw/cygwin installations. For some time already the compiler option -mno-cygwin is no longer supported, and the mingw compiler should be called directly. When you run the setup.py script with the compiler flag as shown, however, python automatically adds the -mno-cygwin flag, which then stops the compiler. The best solution for this I have found so far is to change the

c:\Python27\Lib\distutils\cygwinccompiler.py

file. Simply remove the -mno-cygwin option everywhere in the definition of the Mingw32CCompiler class (just search for no-cygwin in this file and you will find it).
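
The edit itself just removes the obsolete flag from the compiler command lines that distutils builds. In spirit (a hypothetical sketch, not the actual distutils code):

```python
# Sketch of the idea behind the edit to cygwinccompiler.py:
# strip the obsolete flag from a compiler command line.
def strip_no_cygwin(cmd):
    return [arg for arg in cmd if arg != "-mno-cygwin"]

cmd = ["gcc", "-mno-cygwin", "-O2", "-shared", "example.c"]
clean = strip_no_cygwin(cmd)
```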

Looks like I also could have googled this instead of finding out again how to use grep to search in subdirectories, which (with my grep) was not as simple as adding the -r option: find ./* -type f -exec grep -l "no-cygwin" {} \;

Friday, February 22, 2013

Location drive specification MinGWPortable

A long time ago I found a portable version of MinGW somewhere on the web. It is sometimes very convenient to have a gcc compiler ready to run on a computer that is not your own. However, I used it mainly on one computer, and when I finally did try to run it on a different machine, it didn't work. The problem was that it was looking for the MinGW and msys stuff on the wrong drive. Although not very portable, it is easy to change this as long as you remember that msys emulates a unix environment! Just change the drive in

/etc/fstab

It took me some time to find this. But instead of searching myself I could have looked here.

Somehow I am no longer able to find my original MinGWPortable on the web. That is a pity, because the msys terminal is extremely nice: you can resize it in any direction! Perhaps I should have another look here.