DocuMentacoes
---

*1) LDAP configuration on the cluster nodes*

# apt-get install libnss-ldap libpam-ldap nscd

LDAP server URI: ldap://192.168.1.2/
Distinguished name of the search base: dc=republica,dc=star,dc=wars
LDAP version to use: 3
LDAP account for root: cn=admin,dc=republica,dc=star,dc=wars
LDAP root account password: (you know which one; it will not be written here)
Allow LDAP admin account to behave like local root? No
Does the LDAP database require login? No

Add the following lines to /etc/ldap/ldap.conf:

BASE dc=republica,dc=star,dc=wars
URI ldap://192.168.1.2

In /etc/nsswitch.conf, change the entries as shown below:

passwd: compat ldap
group: compat ldap
shadow: compat ldap

# /etc/init.d/nscd restart

Test the configuration:

# id testeldap

It should return the following:

uid=2000(testeldap) gid=100(users) groups=100(users)

*Rafael Gomes, IT consultant, LPIC-1 MCSO, (71) 8318-0284*

---

*2) Passwordless SSH, proxy, NFS and Torque configuration*

*Vitor Vilas Boas vitorvilas@gmail.com*, 11/16/11, to r2d2_ufba:

Folks, passwordless SSH into cluster02 from darthvader is already working for my user.

What still needs to be done:

- Add darthvader to every hosts file.
- Have each user create an SSH key on darthvader and create the authorized_keys file.
- Change the fstab on every node so it mounts /home from darthvader rather than from clustermaster, as currently set.

I will do this today and should finish by the end of the afternoon.

Here is how each user creates an SSH key (if someone could put this on the wiki; Gomex, if you can...):

1 - Log in to darthvader with your own user;
2 - Generate the key with: ssh-keygen -t rsa
3 - Press Enter until the process finishes, leaving the passphrase empty;
4 - Change to the directory where the keys are stored: cd ~/.ssh
5 - Generate authorized_keys: cat id_rsa.pub >> authorized_keys
6 - Test it by logging in to cluster02: ssh cluster02 (accept the host key with yes if prompted) and test again.

*Guilherme:*

*Proxy:*
Add to /etc/profile on the workstations (internet access):
export http_proxy=http://192.168.1.1:3128

*NFS:*
Add to /etc/fstab (mounts /home over NFS):
darthvader.bio.intranet.ufba.br:/home /home nfs rw,hard,intr 0 0

*Torque on the workstations:*
/var/spool/torque/server_name > a single line with the server name "darthvader"
/var/spool/torque/mom_priv/config > add the server name "darthvader"

*Fix for any machine that may have been shut down incorrectly:*

rm -rf /var/run/network/mountnfs

---

*3) DHCP configuration file*

/etc/dhcp/dhcpd.conf (this is where IPs are tied to their respective MAC addresses)

/etc/init.d/isc-dhcp-server restart (restarts the service without rebooting the whole system)

---

*4) Clustal-MPI*

*4a) CLUSTALW-MPI README*

******************************************************************************

CLUSTALW-MPI: ClustalW Analysis Using Grid and Parallel Computing

based on ClustalW, the multiple sequence alignment program
(version 1.82, Feb 2001)

******************************************************************************

This README contains installation help.

ClustalW is a popular tool for multiple sequence alignment. The alignment is achieved via three steps: pairwise alignment, guide-tree generation and progressive alignment. ClustalW-MPI is an MPI implementation of ClustalW. Based on version 1.82 of the original ClustalW, both the pairwise and progressive alignments are parallelized with MPI, a popular message passing programming standard.

ClustalW-MPI is freely available to the user community. The software is available at http://www.bii.a-star.edu.sg/software/clustalw-mpi/

The original ClustalW/ClustalX can be found at ftp://ftp-igbmc.u-strasbg.fr.

Please send bug reports, comments etc. to "kuobin@bii.a-star.edu.sg".

INSTALLATION (for Unix/Linux)
------------

This is an extremely quick installation guide.

1. Make sure you have MPICH or LAM installed on your system.

2. Unpack the package in any working directory:

tar xvfp clustalw-mpi-0.1.tar.gz

3. Take a look at the Makefile and make the modifications that you might desire, in particular:

CC = mpicc
CFLAGS = -c -g

or

CFLAGS = -c -O3

4. Build the whole thing simply by typing "make".

5. If you want to use serial code to compute the neighbor-joining tree, define the macro "SERIAL_NJTREE" when compiling trees.c:

CFLAGS = -c -g -DSERIAL_NJTREE

This macro is defined in the default Makefile. That is, to use the MPI code for the neighbor-joining tree, you have to "undefine" the macro "SERIAL_NJTREE" in your Makefile.

SAMPLE USAGE (for Unix/Linux)
------------

1. To make a full multiple sequence alignment (using one master node and 4 computing nodes):

%mpirun -np 5 ./clustalw-mpi -infile=dele.input
%mpirun -np 5 ./clustalw-mpi -infile=CFTR.input

2. To make a guide tree only:

%mpirun -np 5 ./clustalw-mpi -infile=dele.input -newtree=dele.mytree
%mpirun -np 5 ./clustalw-mpi -infile=CFTR.input -newtree=CFTR.mytree

3. To make a multiple sequence alignment out of an existing tree:

%mpirun -np 5 ./clustalw-mpi -infile=dele.input -usetree=dele.mytree
%mpirun -np 5 ./clustalw-mpi -infile=CFTR.input -usetree=CFTR.mytree

4. The environment variable CLUSTALG_PARALLEL_PDIFF can be used to run the progressive alignment based on the parallelized pdiff(). By default the variable is not set, and the progressive alignment is parallelized according to the structure of the neighbor-joining tree. However, the parallelized pdiff() will still be used in the later stage, when prfalign() tries to align more distant sequences to the profiles. If you don't understand this, simply leave the variable unset.

KNOWN PROBLEM
------------

1. On Intel IA32 platforms, slightly different neighbor-joining trees might be obtained with and without the compiler's optimization flags enabled.

This is because Intel processors use 80-bit FPU registers to cache "double" variables, which are supposed to be 64 bits long. With the '-O1' or higher optimization flag, the compiler does not always immediately save the variables involved in a double operation back to memory. Instead, intermediate results are kept in registers, with 80 bits of precision. This causes problems for nj_tree() because it is sensitive to the precision of floating point numbers.

Solutions:

(1) Other platforms, including Intel's IA64, don't seem to have this problem.

or

(2) Build "trees.c" with options like the following (potentially with a high performance overhead):

%gcc -c -O3 -ffloat-store trees.c // GNU gcc
%icc -c -O3 -mp trees.c // Intel C compiler

or

(3) Declare the relevant variables as "volatile" in nj_tree():

volatile double diq, djq, dij, d2r, dr, dio, djo, da;
volatile double *rdiq;

rdiq = (volatile double *)malloc(((last_seq-first_seq+1)+1)*sizeof(volatile double));
...
...
free((void*)rdiq);

*4b) Usage manual taken from: http://www.cuhk.edu.hk/itsc/compenv/research-computing/organon/cwmpi.html#ii*

Note: clustalw-mpi is already installed on the entire cluster.

*1. Introduction of ClustalW-MPI*

ClustalW is a general-purpose multiple sequence alignment program for DNA or proteins. The alignment is achieved via three steps: pairwise alignment, guide-tree generation, and progressive alignment. ClustalW-MPI is an MPI and GRID-aware implementation of ClustalW. Based on version 1.82 of the original ClustalW, both the pairwise and progressive alignments are parallelized with MPI, a popular message passing programming standard.

*2. Input Sequences*

All input sequences must be in one file, one after another.
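For illustration, a minimal input file in Pearson (Fasta) format, with two made-up sequence names and sequences, looks like this:

```
>seq1_hypothetical
ATGGCGTACGCTAGCTAGGACTT
>seq2_hypothetical
ATGGCGTTCGCTAGCTAGGACAT
```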
Seven formats are automatically recognised: NBRF/PIR, EMBL/SWISSPROT, Pearson (Fasta), Clustal (*.aln), GCG/MSF (Pileup), GCG9/RSF and GDE flat file. All non-alphabetic characters (spaces, digits, punctuation marks) are ignored, except "-", which is used to indicate a GAP ("." in GCG/MSF).

If the input file is in GenBank (*.gb) or another format that is not supported by ClustalW and cannot be converted by ClustalX, you can use EMBOSS-3.0 ($ seqret -osformat fasta) to convert the file to Fasta (*.fasta) format beforehand.

*3. Job Submission and Monitoring*

*Sample PBS script for a 4-node job, "clustalw.pbs"*
You can use the following script to run a full multiple sequence alignment job on 4 computing nodes (8 CPUs). To edit the script, you may run pico, e.g. % pico clustalw.pbs

#!/bin/sh
#PBS -q q4n16g
#PBS -N cpu8
#PBS -lnodes=4:ppn=2
#PBS -m bea
#PBS -M s800000@organon.itsc.cuhk.edu.hk
#
export PATH=/usr/pbs/bin:$PATH;
cd clustalw-mpi-0.13
source /usr/local/etc/mpich.sh
time pbs_mpirun clustalw-mpi -infile=test.fasta >& test8.out

*Job Submission*
Then run the job by submitting the script to PBS as follows:
% qsub clustalw.pbs

For other PBS commands, please refer to the section "Starter Guide for PBS".

*Job Monitoring*
List the current jobs on the cluster:
% qstat
List all jobs, one line per job:
% qstat -a
List only the currently running jobs:
% qstat -r
Also show the nodes allocated to each job:
% qstat -n
Show detailed information on a specific job:
% qstat -f <jobid>

Advanced PBS commands: advanced users can type _% man q_command_ to see the details of the
following commands.

| *Command* | *Function* |
| qalter | Alter a job's attributes. |
| qdel | Delete a job. |
| qhold | Place a hold on a job to keep it from being scheduled for running. |
| qmove | Move a job to a different queue or server. |
| qmsg | Append a message to the output of an executing job. |
| qrerun | Terminate an executing job and return it to a queue. |
| qrls | Remove a hold from a job. |
| qselect | Obtain a list of jobs that meet certain criteria. |
| qsig | Send a signal to an executing job. |

---

*5) Installing Mega on Debian squeeze*

5.1) Add "deb http://ubuntu.mirror.cambrium.nl/ubuntu/ natty main universe" to sources.list
5.2) Install wine1.2: aptitude install wine1.2
5.3) Download wine1.2-gecko from the mirror: http://mirror.pnl.gov/ubuntu//pool/multiverse/w/wine1.2-gecko/wine1.2-gecko_1.0.0+1_i386.deb
5.4) Install wine1.2-gecko using Gdebi
5.5) Add the Mega repository "deb http://update.megasoftware.net/deb/ mega main" to sources.list
5.6) Install Mega: aptitude install mega

Rodrigo.

---

*6) MPICH2 - Debian and RedHat*

Copied from http://www.flaviotorres.com.br/fnt/artigos/mpich2.php
By: Flavio Torres - ftorres[@]ymail.com
Published: 03/08/2007

Installing MPICH2 on Linux systems.

MPICH is one of the existing implementations of the MPI (Message-Passing Interface) message-passing library standard. Besides the MPI library, MPICH contains a programming environment that includes a set of libraries for performance analysis (profiling) of MPI programs and a graphical interface for all the tools. In other words, with MPI you can have a single process running across multiple servers: a cluster.

This is a second article, for those who don't have an Ubuntu dapper, or who fought too hard with Python 2.3 :)

Required packages:
* gcc
* cpp
* libc6
* libc6-dev
* g77
* g++
* Python 2.2 or later

Installing the required packages:

apt-get install gcc cpp libc6 libc6-dev g77 g++

Python, in 99% of installations, already comes in version 2.4 or 2.5.

Get the mpich2 tarball from the project site: http://www-unix.mcs.anl.gov/mpi/mpich/

wget http://www-unix.mcs.anl.gov/mpi/mpich/downloads/mpich2-1.0.5p4.tar.gz

Unpack the file inside your home directory:

tar -xvzf mpich2-1.0.5p4.tar.gz ; cd mpich2-1.0.5p4

Compile and install:

./configure -prefix=/home/you/mpich2-install |& tee configure.log
make |& tee make.log
make install |& tee install.log

If you don't set a prefix, the default will be /usr/local/bin.
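A note on the `|&` redirection used above: it is csh/tcsh syntax (bash 4 and later also accept it); under a plain POSIX shell the equivalent is `2>&1 |`. A small sketch, using a stand-in command in place of ./configure, shows that both output streams end up in the log:

```shell
# csh/tcsh:  ./configure ... |& tee configure.log
# sh/bash:   ./configure ... 2>&1 | tee configure.log
# Demo with a stand-in command that writes to stdout and stderr:
( echo "checking compilers..."; echo "warning: example" 1>&2 ) 2>&1 | tee configure.log > /dev/null
wc -l < configure.log    # both lines were captured: prints 2
```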
Add the install location to your $PATH.

For csh and tcsh:
setenv PATH /home/you/mpich2-install/bin:$PATH

For bash and sh:
export PATH=/home/you/mpich2-install/bin:$PATH

Check that everything is in order:

which mpd
which mpiexec
which mpirun

which should return the install location of the executables.

After installing on *all* hosts, configure name resolution: edit your /etc/hosts and list all the machines:

vi /etc/hosts
192.168.0.1 host1
192.168.0.5 host2
192.168.0.2 host3

Now configure SSH for passwordless connections between *all hosts*. Generate the key, remembering to do this on every server:

ssh-keygen -t dsa -b 1024
* Do not type a passphrase when asked; just press <enter>

Now configure SSH to authenticate without a password:

Step 1) Copy the key from host1 to host2 and host3:

host1$ scp .ssh/id_dsa.pub usuario@host2:
host1$ scp .ssh/id_dsa.pub usuario@host3:

Step 2) Configure the key for authentication on host2:

host2$ cat id_dsa.pub >> .ssh/authorized_keys
host2$ chmod 600 .ssh/authorized_keys

Step 3) Configure the key for authentication on host3:

host3$ cat id_dsa.pub >> .ssh/authorized_keys
host3$ chmod 600 .ssh/authorized_keys

Now repeat the 3 steps for all 3 machines. At the end, you should be able to ssh without a password from:

host1 > host2 and host3
host2 > host1 and host3
host3 > host2 and host1

Configure the MPI files, which are:

* mpd.conf
* mpd.hosts

The mpd.conf file holds the authentication information MPI uses between the machines, so the password must be the SAME on all hosts. If you are using root for the tests, this file must be in /etc; if you are using a regular user, it must be in $HOME.
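The append-and-set-permissions step described above can be sketched locally, using a scratch directory in place of a real ~/.ssh and a made-up public-key string (both are placeholders, not real paths or keys):

```shell
# Scratch directory standing in for ~/.ssh; the key below is a fake placeholder.
SSHDIR=./ssh-scratch
mkdir -p "$SSHDIR"
echo "ssh-dss AAAAB3example== usuario@host1" > "$SSHDIR/id_dsa.pub"
# Append the public key and make the file readable/writable by its owner only
cat "$SSHDIR/id_dsa.pub" >> "$SSHDIR/authorized_keys"
chmod 600 "$SSHDIR/authorized_keys"
ls -l "$SSHDIR/authorized_keys"   # shows -rw-------
```

Note that sshd refuses keys from an authorized_keys file with loose permissions, which is why the chmod 600 step matters.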
Add the password to .mpd.conf and copy it to the other machines:

host1$ echo "MPD_SECRETWORD=mr45-j9z" > .mpd.conf
host1$ chmod 600 .mpd.conf
host1$ scp .mpd.conf host2:
host1$ scp .mpd.conf host3:

The _mpd.hosts_ file lists the machines that are part of the cluster for THIS user.

Add the cluster machines to .mpd.hosts and replicate it to the other machines:

host1$ echo "host1" > .mpd.hosts
host1$ echo "host2" >> .mpd.hosts
host1$ echo "host3" >> .mpd.hosts
host1$ scp .mpd.hosts host2:
host1$ scp .mpd.hosts host3:

Done, the boring part is over :) Now start the MPI daemon with mpdboot:

host1$ mpdboot -n 3 -f .mpd.hosts
host1$ mpdtrace
host1
host2
host3

Perfect, they are all responding! Now just play with a simple test:

host1$ mpiexec -n 5 mpich2-1.0.5p4/examples/cpi
Process 0 of 5 is on host1
Process 2 of 5 is on host2
Process 1 of 5 is on host3
Process 4 of 5 is on host3
Process 3 of 5 is on host1
pi is approximately 3.1415926544231230, Error is 0.0000000008333298
wall clock time = 0.925560

---

*7) mrbayes-multi*

Source: http://nebc.nerc.ac.uk/bioinformatics/docs/mrbayes-multi.html

| Name | mrbayes-multi |
| Description | *MrBayes* is a program for the Bayesian estimation of phylogeny. *mrbayes-multi* provides a multi-processor version of [[http://nebc.nerc.ac.uk/bioinformatics/docs/mrbayes.html][MrBayes]]. In order to run the parallel version of MrBayes (across multiple processors) you must have MPI installed, and MrBayes must be launched using the mpirun command (i.e. it cannot be executed in the standard interactive mode). Finally, you will need to have mpd configured on your system. To achieve the above, run commands similar to: echo "MPD_SECRETWORD=secret" > ~/.mpd.conf then chmod 600 ~/.mpd.conf then mpd & and finally mpirun -np 4 mrbayes-multi. In the first command, change the text "secret" to a password only you know. In the final command, change the number 4 to the number of cores you wish to run MrBayes on. Bayesian inference of phylogeny is based upon a quantity called the posterior probability distribution of trees, which is the probability of a tree conditioned on the observations. The conditioning is accomplished using Bayes's theorem. The posterior probability distribution of trees is impossible to calculate analytically; instead, MrBayes uses a simulation technique called Markov chain Monte Carlo (MCMC) to approximate the posterior probabilities of trees. |
| Remote Documentation | http://mrbayes.csit.fsu.edu/wiki/index.php/Main_Page |

---

-- Main.RodrigoZucoloto - 02 Nov 2011
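The mpd credential-file setup used in the MPICH2 and mrbayes-multi sections above can be sketched against a scratch path instead of the real ~/.mpd.conf (the path and secret word below are placeholders):

```shell
# Scratch path standing in for ~/.mpd.conf; replace the secret with your own.
CONF=./mpd.conf.scratch
umask 077                                # new files get no group/other permissions
echo "MPD_SECRETWORD=change-me" > "$CONF"
stat -c %a "$CONF"                       # prints 600, as mpd requires
```

Setting umask 077 before creating the file achieves the same effect as creating it and then running chmod 600 on it.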
Topic revision: r11 - 05 Apr 2012 - 11:51:48 - RodrigoZucoloto