Wednesday, December 9, 2009

How can we integrate with HP CMDB - part 3

Here is some code to interact with the HP SM7 Web Services. It uses an Axis library generated from the WSDL file, available at iTunes or the following link: HP SM7 Axis Library.

// The key that identifies the device: the CI name from the work entry
ci = new Packages.com.hp.schemas.SM._7.Common.StringType (work.getString ("NAME"));

keys = new Packages.com.hp.schemas.SM._7.DeviceKeysType ();
keys.setConfigurationItem (ci);

// The instance carries the attributes of the device being created
instance = new Packages.com.hp.schemas.SM._7.DeviceInstanceType ();
status = new Packages.com.hp.schemas.SM._7.Common.StringType ("In use");
instance.setStatus (status);
computer = new Packages.com.hp.schemas.SM._7.Common.StringType ("computer");
instance.setConfigurationItemType (computer);
application = new Packages.com.hp.schemas.SM._7.Common.StringType ("application");
instance.setAssignmentGroup (application);

// Messages array exchanged with the service
messageType = new Packages.com.hp.schemas.SM._7.Common.MessageType ();
messageTypes = [messageType];

// Build the device model and the request flags
type = new Packages.com.hp.schemas.SM._7.DeviceModelType (keys, instance, messageTypes, "query");
booleanf = new java.lang.Boolean (false);
booleant = new java.lang.Boolean (true);

request = new Packages.com.hp.schemas.SM._7.CreateDeviceRequest (type, booleanf, booleanf, booleant);

task.logmsg ("request: " + request);

locator = new Packages.com.hp.schemas.SM._7.Device_ServiceLocator ();

device = locator.getDevice ();

device.setUsername ("falcon");

response = device.createDevice (request);

task.logmsg ("response: " + response + " message: " + response.getMessage());
messages = response.getMessages ();
// for-in over the array yields indices, so index into the array
for (i in messages) {
    task.logmsg ("messages: " + messages[i]);
}

Wednesday, December 2, 2009

How can we integrate with HP CMDB? - Part 2

 Nothing like having the opportunity to touch an HP Service Management 7 installation and see first hand what's possible and what's not.

The bottom line is that, like any 'modern' application, HP SM7 makes Web Services available for interacting with its modules, like Change Management and Configuration Items.

So, forget what I said before, because it's possible to load data directly into HP SM7 using the Web Service interface.

I didn't find a way to reconcile the data coming from multiple data sources, but at least the claim that data can't be loaded into HP SM7 is not true.

Thursday, November 19, 2009

How can we integrate with HP CMDB?

HP CMDB (HP CMDB link) is HP's solution for the Configuration Management System (CMS). It's a re-brand of the Mercury uCMDB product. As with any CMDB, it has the ability to import or federate data.

Although not clearly stated, it seems that the only provider of imported data is HP Discovery and Dependency Manager (HP DDM), another Mercury product.

So the only remaining alternative is to federate data, which is a really compelling approach, as it doesn't require ETL (extract, transform and load) of the data.

The issue arises when multiple data sources (including HP DDM) refer to the same resource. Although not clearly stated, HP CMDB doesn't seem to provide a way to reconcile the data, deferring this job to HP SM7 (HP SM7), another suite of products that can consume HP CMDB data.

Bottom line: although not clearly stated, there seems to be no way to load data into HP CMDB, nor is there a way to reconcile federated data.

Hard to believe? Tell me if you know anything different...

Monday, November 16, 2009

How to delete all CCMDB Authorized CI classification

Here is the script to delete all CCMDB Authorized CI classification:

delete from classstructure where classstructureid in (select classstructureid from classusewith where objectname = 'CI')

Still, do it at your own risk.

Wednesday, September 30, 2009

How to make TPM 5.1.1 configure a Ubuntu machine properly

As you know, TPM 5.1.1 can deploy RedHat and SuSE. What about the popular distro Ubuntu? The answer is: not out of the box, as the TPM 5.1.1 workflow doesn't properly change the hostname in Ubuntu.

To fix that, you just have to add the lines between the # EP start and # EP end markers below to the script /opt/ibm/tivoli/tpm/repository/vmware-vi3-scripts/config_linux.sh:


#Password setup
echo "${rootpassword}" | passwd --stdin root

# EP start
echo ${hostname} > /etc/hostname
# EP end

# Shutdown
shutdown -h now

With that, TPM 5.1.1 can successfully define the hostname in Ubuntu.

Friday, September 25, 2009

Creating Extended Attributes with the TDI IdML Connector


As you know, the IdML Connector can be used to create (guess what...) an IdML!

The tags in the IdML comply with the Common Data Model, and when they are written to the XML file, they are prefixed with "cdm:".

TADDM has the concept of an Extended Attribute. The question is: how to specify an extended attribute using the IDML Connector?

The answer is simple, Vince: In the Attribute Connector area, create an attribute whose name starts with "extattr:". The IDML Connector treats these attributes differently: it doesn't try to find a corresponding attribute in the Common Data Model, but simply copies them to the XML file.
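
The mapping can be pictured with a small sketch. The function name below is hypothetical (the real logic lives inside the IDML Connector); it only illustrates the rule: "extattr:" names are copied verbatim, everything else is assumed to be a Common Data Model attribute and gets the "cdm:" prefix:

```javascript
// Sketch of how an IdML writer might map attribute names to XML tags.
// toIdmlTag is a hypothetical helper, not part of the real connector.
function toIdmlTag(attributeName) {
  var EXT_PREFIX = "extattr:";
  if (attributeName.indexOf(EXT_PREFIX) === 0) {
    // Extended attribute: no CDM lookup, copied as-is to the XML file
    return attributeName;
  }
  // Regular attribute: assumed to exist in the Common Data Model
  return "cdm:" + attributeName;
}

console.log(toIdmlTag("Fqdn"));          // cdm:Fqdn
console.log(toIdmlTag("extattr:rack"));  // extattr:rack
```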

Go, have fun with the IDML Connector!

Monday, April 27, 2009

Federation and TPAE

Introduction


This document describes how the Tivoli Process Automation Engine (TPAE) can be configured to use a federated data source.






Assumptions:


This document assumes that DB2 Enterprise Edition Server is being used as a backend database for the TPAE.

Configuration:


Run the following command to enable the federation in a DB2 instance:

db2 update dbm cfg using FEDERATED YES


Scenario

Assume there is a DB2 database containing information about Computer Systems, to be federated with the Authorized CI space in the Change and Configuration Management Database (CCMDB). Here is the DB2 table definition:

COMPUTER_OWNER table

Column Name          Column Type   Length
COMPUTER_OWNER_ID    INTEGER
COMPUTER_NAME        VARCHAR       50
OWNER                VARCHAR       50



with the following contents:

COMPUTER_OWNER_ID   COMPUTER_NAME       OWNER
16                  bush.my.com         Dan Quayle
18                  bushw.my.com        Dick Cheney
14                  carter.my.com       Walter Mondale
17                  clinton.my.com      Al Gore
5                   coolidge.my.com     Charles Dawes
9                   eisenhower.my.com   Richard Nixon
13                  ford.my.com         Nelson Rockefeller
7                   froosevelt.my.com   John Garner
4                   harding.my.com      Calvin Coolidge
6                   hoover.my.com       Charles Curtis
11                  johnson.my.com      Hubert Humphrey
10                  kennedy.my.com      Lyndon Johnson
12                  nixon.my.com        Spiro Agnew
19                  obama.my.com        Joe Biden
15                  reagan.my.com       George Bush
2                   taft.my.com         James Sherman
1                   troosevelt.my.com   Charles Fairbanks
8                   truman.my.com       Alben Barkley
3                   wilson.my.com       Thomas Marshall




Configuring TPAE

Creating New Database Object


Although the COMPUTER_OWNER table will be federated from an external database, in order to use this table in TPAE, it needs to be defined using the Database Configuration application (under System Configuration -> Platform Configuration).

  • In the Database Configuration application, click New Object:



  • Type COMPUTER_OWNER in the Object field, and Computer Owner in the Description:


  • In the Attributes tab, add the attributes COMPUTER_NAME and OWNER and delete the attribute DESCRIPTION:



  • Save the new Database Object.

Applying the Database Change


  • Make a backup of the TPAE database
  • Go back to the List tab
  • In the Select Action pull down menu, choose Manage Admin Mode
  • Click Turn Admin Mode ON

  • Click the Refresh Status button until the Admin Mode is set to ON:


  • Close the Turn Admin Mode ON dialog
  • In the Select Action pull down menu, click Apply Configuration Changes
  • Select the Do you have a current backup? checkbox
  • Click the Start Configuring the Database button:

  • Click the Refresh Status button until the End ConfigDB message is displayed:


  • Close the Structural Database Configuration dialog.

Creating the Relationship

  • Select the CI Object:

  • Click the Relationships tab
  • Add the new Relationship, specifying the following information:
    • Relationship: COMPUTER_OWNER
    • Child Object: COMPUTER_OWNER
    • Where Clause: computer_name = :cinum



  • Save the new Relationship
  • Go back to the List tab
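
Conceptually, the relationship turns the where clause into a filter on the child object, with the bind variable :cinum replaced by the current CI's number. A minimal sketch of that resolution, using rows from the sample table above (resolveRelationship is a hypothetical stand-in for TPAE's relationship engine):

```javascript
// Sketch of how a TPAE relationship where-clause resolves:
// "computer_name = :cinum", with :cinum bound to the parent CI's CINUM.
var computerOwners = [
  { computer_name: "obama.my.com",  owner: "Joe Biden" },
  { computer_name: "carter.my.com", owner: "Walter Mondale" }
];

// Hypothetical helper standing in for TPAE's relationship engine
function resolveRelationship(parentCi, rows) {
  return rows.filter(function (row) {
    return row.computer_name === parentCi.cinum;
  });
}

var related = resolveRelationship({ cinum: "obama.my.com" }, computerOwners);
console.log(related[0].owner); // Joe Biden
```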

Modifying TPAE GUI to show the federated information


In this step, we'll add the Computer Owner to the TPAE GUI.

  • Go to Application Designer (under System Configuration -> Platform Configuration).
  • Select the CI application:


  • Click the Configuration Item tab
  • Select the Organization attribute:


  • Right-click the CI Location attribute and click Copy
  • Click the section containing the CI Location attribute:
  • Right click the section and select Paste. A copy of the Organization attribute is put in the section.
  • Right-click the new Organization field and select Properties
  • In the Attribute field, type COMPUTER_OWNER.OWNER
  • Define a value in the Label field:


  • Save your work in the Application Designer



Configuring Database Nickname


So far, the database still contains a real table (created through the Database Configuration application). We'll redefine it as a nickname pointing to the table in the federated database.

Registering the Federated Database in the TPAE Database Server


If the federated database is in another database instance, the database needs to be registered in the TPAE database server.
  • Run the following command to register the node:
    db2 catalog tcpip node federate remote <remote_db_server> server <remote_db_port>
  • Run the following command to register the database:
    db2 catalog db <remote_db> at node federate

Creating the Database Nickname


  • In the DB2 Control Center, expand the options under MAXDB71 and select Tables
  • The application shows all tables. Right-click the table COMPUTER_OWNER and select Drop

  • In the left side, under MAXDB71, right-click Nicknames and select Create..
  • In the Introduction screen, click Next
  • In the Specify the data source and the wrapper, select DB2.
  • Click the Create... button in the Wrappers area.
  • Click OK in the Create Wrapper dialog.
  • Click Next.
  • In the Specify the server definition for the data source, click the Create... button
  • In the Create Server Definitions dialog, click the Discover.. button
  • Select the federated database and define the Type and Version:
  • Click Properties...
  • In the Server Definition Properties dialog, type the User ID and Password:
  • Click OK to close the Server Definition Properties
  • Click OK to close the Create Server Definitions
  • In the Create Nicknames dialog, click Next twice
  • In the Define Nicknames windows, click Discover...
  • In the Discover dialog, filter by Schema name
  • Select the table and click Properties...
  • In the Nickname schema, select MAXIMO and click OK
  • Click Finish to close the Create Nicknames dialog.


Create User Mappings

  • Expand the Federated Database Objects
  • Expand DRDA
  • Expand Server Definitions
  • Expand the Server name
  • Select User Mappings

  • Click Create New User Mapping
  • In the Create User Mapping dialog, select the MAXIMO user:
  • On the Settings tab, specify the remote user and password and click OK:

Viewing the Federated Data


After the nickname has been configured, the federated data can be seen in the Authorized CI application:

Using the deltaEngine.js to determine changes in TDI


Introduction




This document describes how to use the JavaScript code deltaEngine.js to determine changes in the data source.



This document extends the tutorial Creating IDML Books using Tivoli Directory Integrator (DLA Tutorial), describing how to create a Discovery Library Adapter. Of course, the deltaEngine can be used in any Assembly Line; its use in the DLA process is just an example.




Configuration of deltaEngine.js




The component deltaEngine.js can be obtained from Lotus Quickr Place. Copy this file to your TDI solution directory. Then follow these steps to configure it:





  • Right-click Scripts and click New Script













  • In the Input Text, give the Script a name and click OK:










  • Click the Config... tab, select the Implicitly Included checkbox and then add the deltaEngine.js file:











  • Click Properties and add a new Property file:









  • In the Connector Configuration tab, specify a name to the Property file:





  • In the Property Stores list, move the Derby-Properties to the top of the list:





  • In the Editor tab, define the properties below, adjusting the com.ibm.di.store.database property according to your TDI solution directory:















Configuration of the System Store




Before we can use the TDI System Store, we need to start it. Follow these steps to start it:





  • Click Store -> Network Server Settings:











  • Click Start:








Using the deltaEngine Script


This section describes the procedure to use the deltaEngine in an Assembly Line:





  • In a suitable spot in your Assembly Line, add a Script component










  • In the Input Text dialog, give it a name and click OK:





  • Type the following script in the CalculateDelta component:





deltaEntry = deltaEngine.computeDelta (work, "machine");

task.logmsg ("deltaEntry: " + deltaEntry.getOperation ());










  • Run your Assembly Line. The first time you run the Assembly Line, it shows the deltaEntry operation as add, indicating the records should be added. The subsequent runs show the operation as generic, indicating there was no change to the entry:

11:54:20 @@Old snapshot: [machine:troosevelt.my.com]
11:54:20 @@Commiting snapshot changes...
11:54:20 @@finished
11:54:20 deltaEntry: generic

  • Assuming we want to skip the entries that have no change, add the following code to the CalculateDelta script:

    deltaEntry = deltaEngine.computeDelta (work, "machine");

    task.logmsg ("deltaEntry: " + deltaEntry.getOperation ());

    if (!deltaEntry.getOperation().equals ("add")) {
        task.logmsg ("Skipping entry: " + work.getString ("machine"));
        system.skipEntry ();
    }
  • Now, the Assembly Line will skip the records that are not new to the data source.
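
The snapshot idea behind deltaEngine.computeDelta can be sketched in plain JavaScript. This is a simplified stand-in using an in-memory object instead of the TDI System Store, not the real deltaEngine code: keep a snapshot of each entry keyed by an attribute, report "add" for unseen keys, "modify" for changed entries, and "generic" when nothing changed:

```javascript
// Simplified stand-in for deltaEngine.computeDelta.
// snapshot plays the role of the TDI System Store.
var snapshot = {};

function computeDelta(entry, keyAttribute) {
  var key = entry[keyAttribute];
  var serialized = JSON.stringify(entry);
  if (!(key in snapshot)) {
    snapshot[key] = serialized;
    return "add";          // first time this key is seen
  }
  if (snapshot[key] !== serialized) {
    snapshot[key] = serialized;
    return "modify";       // key known, attributes changed
  }
  return "generic";        // no change since the last snapshot
}

var entry = { machine: "troosevelt.my.com", owner: "Charles Fairbanks" };
console.log(computeDelta(entry, "machine")); // add
console.log(computeDelta(entry, "machine")); // generic
```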

Conclusion


This tutorial showed how to use the deltaEngine.js to determine changes in the data source. With a few steps, it's possible to leverage the internal TDI System Store to store a snapshot of the data source and skip records that have been processed.

Sunday, April 19, 2009

CA Spectrum Discovery Library Adapter

In the old days, a Discovery Library Adapter was described as a Java component that exports the data from a certain product as an XML file that complies with the IDML XML schema format.

Well, since then a DLA has been described as a process to export data into this XML schema, without mentioning a Java component, as the use of data integration tools, like Tivoli Directory Integrator, proved more effective at creating an IDML file.
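
As a rough picture of what such a process produces, here is a sketch that emits an IdML-like fragment for one computer system. The element names are abbreviated for illustration; the real Discovery Library schema defines many more required elements and attributes:

```javascript
// Sketch of the kind of XML an IdML export process emits for one CI.
// Simplified for illustration; not the full IdML schema.
function toIdmlFragment(ci) {
  return [
    '<cdm:sys.ComputerSystem id="' + ci.id + '" sourceToken="' + ci.id + '">',
    '  <cdm:Fqdn>' + ci.fqdn + '</cdm:Fqdn>',
    '</cdm:sys.ComputerSystem>'
  ].join('\n');
}

var fragment = toIdmlFragment({ id: "CS-1", fqdn: "obama.my.com" });
console.log(fragment);
```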

My goal is to expand the mechanisms for creating DLAs, and I have always dreamed of creating a DLA in Perl. Well, here is the perfect opportunity:

The CA Spectrum DLA

This DLA is very simple and can be run on any machine:

#!/usr/bin/perl
print "Go, talk to your CA representative and demand a change to CA's license agreement\n"

Done!!

Here is the deal: according to the CA license agreement, CA owns the data it collects with its products, so other products can't retrieve it. So, it's illegal to write a DLA for CA. How stupid is that!!

Imagine if the IBM Information Management group crafted the following statement: "The data stored in an IBM DB2 relational database belongs to IBM and can't be extracted for the purpose of moving it to another database."

Imagine if EMC made a similar statement: "The data stored in an EMC storage device belongs to EMC and can't be used to move data outside an EMC device."

The world would be unmanageable! As a customer, if I buy a certain product, I would expect to be able to use the data it collects in any way I want. Not with CA...

The solution: get rid of CA. Your life will be better without them.

Monday, March 16, 2009

How to determine which change is using a certain workflow

When there is an active instance using a certain workflow process, the process can't be disabled. To see which work order is using a certain workflow, go to the Workflow Administration application. It lists all active processes and allows you to stop them.

Sunday, March 15, 2009

Comparing TADDM performance on different platforms

Introduction


This document compares the running time of a discovery in TADDM 7.1.2 on a virtual environment (VMware ESX) and on non-virtual environments. This is not a comprehensive test and should not be taken as a TADDM performance result, but just as a comparison of the TADDM Server on different platforms.

Platforms

Virtual environment


Linux RedHat Enterprise Server Release 5
2 vCPU
3600 MB
VMware ESX, with just 1 VM running

Non-virtual Windows environment


Windows 2003 SP2
2 CPUs
3200 MB
TADDM 7.1.2 and DB2 collocated in one server

Non-virtual Linux environment


Linux RedHat Enterprise Server Release 5
2 CPUs
3200 MB
TADDM 7.1.2 and DB2 collocated in one server

Two-server, non-virtual environment


TADDM Server in the Windows environment above
DB2 Server in a separate machine, same specification as the server above


Target environment


37 Computer Systems, consisting of Windows, AIX, HP-UX, Solaris and Linux machines.
Number of components found: 315 (3 Server Equivalents)


Running time


Running time                                    Virtual env      Windows env      Linux env        Two-server env
Level 3 Discovery                               30 min 47 sec    8 min 49 sec     28 min 37 sec    6 min 4 sec
Level 3 Discovery without WebSphereCellSensor   10 min 36 sec    7 min 43 sec     6 min 16 sec     5 min 8 sec
Level 2 Discovery                               10 min 15 sec    5 min 6 sec      7 min 34 sec     4 min 4 sec
100 MB IDML bulk load (250,000 CIs)             13 hours 9 min   5 hours 52 min   13 hours 55 min  22 hours 19 min
Level 3 Rediscovery after bulk load             30 min 4 sec     6 min 47 sec     16 min 25 sec    6 min 24 sec


Conclusions


  • Even at a small scale, the discovery process in a virtual environment is almost 4 times slower than in a non-virtual environment
  • Although the running times for the other activities don't show a significant difference, the discovery process in a virtual environment is just prohibitive.
  • Except for bulk loading, the two-server environment had the best performance in all tests.

Bringing Microsoft Active Directory manager information into Tivoli Process Automation Engine


  • Go to TPAE, DB Config, select PERSON table and add the following attributes
    • supervisor_dn as aln(511);

    • person_dn as aln(511).

  • Still in the DB Config, define the following relationship in the PERSON table:
  • Go to TPAE, Cron Task Setup, select LDAPSYNC and create a user mapping like the following:

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE ldapsync SYSTEM "ldapuser.dtd">
<ldapsync>
<user>
<basedn>...</basedn>
<filter>(objectClass=user) </filter>
<scope>subtree</scope>
<attributes>
<attribute>sAMAccountName</attribute>
<attribute>givenName</attribute>
<attribute>displayName</attribute>
<attribute>memberOf</attribute>
<attribute>sn</attribute>
<attribute>manager</attribute>
<attribute>distinguishedName</attribute>
</attributes>
<datamap>
<table name="MAXUSER">
<keycolumn name="USERID" type="UPPER">sAMAccountName</keycolumn>
<column name="LOGINID" type="ALN">sAMAccountName</column>
<column name="PERSONID" type="UPPER">sAMAccountName</column>
<column name="STATUS" type="UPPER">{ACTIVE}</column>
<column name="TYPE" type="UPPER">{PRIMARY}</column>
<column name="QUERYWITHSITE" type="YORN">{1}</column>
<column name="FORCEEXPIRATION" type="YORN">{0}</column>
<column name="FAILEDLOGINS" type="YORN">{0}</column>
<column name="PASSWORD" type="CRYPTO">{0}</column>
<column name="MAXUSERID" type="INTEGER">{:uniqueid}</column>
<column name="SYSUSER" type="YORN">{0}</column>
<column name="INACTIVESITES" type="YORN">{0}</column>
<column name="SCREENREADER" type="YORN">{0}</column>
</table>
<table name="PERSON">
<keycolumn name="PERSONID" type="UPPER">sAMAccountName</keycolumn>
<column name="FIRSTNAME" type="ALN">givenName</column>
<column name="LASTNAME" type="ALN">sn</column>
<column name="STATUS" type="UPPER">{ACTIVE}</column>
<column name="TRANSEMAILELECTION" type="UPPER">{NEVER}</column>
<column name="STATUSDATE" type="ALN">{:sysdate}</column>
<column name="ACCEPTINGWFMAIL" type="YORN">{1}</column>
<column name="LOCTOSERVREQ" type="YORN">{1}</column>
<column name="PERSONUID" type="INTEGER">{:uniqueid}</column>
<column name="HASLD" type="YORN">{0}</column>
<column name="LANGCODE" type="UPPER">{en}</column>
<column name="PERSON_DN" type="UPPER">distinguishedName</column>
<column name="SUPERVISOR_DN" type="UPPER">manager</column>
</table>
</datamap>
</user>
</ldapsync>

  • Go to Action application and define the following Action:

  • Go to Escalation application and define the following Escalation:

Monday, March 2, 2009

DB2 on Rails

I'm starting again with the IBM_DB adapter gem, available at: http://rubyforge.org/projects/rubyibm/

  • gem install ibm_db
  • rails newapp
  • cd newapp
  • ruby script/console
  • >> gem 'ibm_db'
  • ruby script/generate model person
  • in the file db/migrate/*_create_person.rb, define the following:
class CreatePeople < ActiveRecord::Migration
  def self.up
    create_table :people do |t|
      t.column :firstname, :string
      t.column :lastname, :string
      t.column :phone, :string
    end
  end

  def self.down
    drop_table :people
  end
end

  • rake db:migrate

Saturday, January 31, 2009

DB2 on Rails

It's time to test running Rails with DB2 as the backend.

I downloaded the information from http://www.alphaworks.ibm.com/tech/db2onrails.

I am running on Linux, so I did:

  • export DB2DIR=/opt/IBM/db2/V9.1_01
  • export DB2LIB=/opt/IBM/db2/V9.1_01/lib32
then:

  • cd Source
  • rake
It failed with the following message:

checking for SQLConnect() in -ldb2... no
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers. Check the mkmf.log file for more
details. You may need configuration options.

Looking at the mkmf.log, I see:

conftest.c:3: error: ‘SQLConnect’ undeclared (first use in this function)
conftest.c:3: error: (Each undeclared identifier is reported only once
conftest.c:3: error: for each function it appears in.)

  • It seems the environment variables were not set properly.
  • I ran it again and got the following message:
Database environment is not set up
Run ". /home/db2inst1/sqllib/db2profile" and retry


After sourcing the db2profile, I ran it again and got the following messages:

Loaded suite tests
Started
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
Finished in 0.593455 seconds.

But it seems the shared library has been generated successfully.

  • cp ibm_db2.so /usr/local/lib/ruby/site_ruby/1.8/i686-linux/
Installing the Adapter

  • cp ibm_db2_adapter.rb /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.2.2/lib/active_record/connection_adapters/
  • vi /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.2.2/lib/active_record.rb

Well, after trying that without success, I found the following response in the forum:

Given that RAILS_CONNECTION_ADAPTERS went away, the IBM gem is no longer called "ibm_db2", and that the minimum software requirements have changed, I recommend skipping the "Toolkit for DB2 on Rails" and to start learning about the "ibm_db" gem here: http://rubyforge.org/projects/rubyibm/

N.B. The unwarranted license of DB2 Express-C is currently limited to using 2 processor cores & 2 GB of memory. However, it's still a better option than "Oracle XE".

Starting from scratch...

Monday, January 12, 2009

Ruby and DB2

The next step in my Ruby adventure is to use DB2 as the data source. This will allow me to plug RoR into many applications that have DB2 as the backend.

* Some instructions on how to download the DB2 driver are described at: http://wiki.rubyonrails.org/rails/pages/IBM+DB2

* After that, let me try to scaffold a DB2-based application: Tivoli Process Automation Engine

* cd ~/Rails
* mkdir TPAE
* cd TPAE
* rails tpae
* cd tpae/config
* vi database.yml

development:
  adapter: ibm_db
  database: maxdb71
  username: maximo
  password: itsmswat
  schema: maximo
  host: ismdbserver.tivlab.raleigh.ibm.com
  port: 50005


* rake db:migrate --trace
* It failed with the following message:
Failed to connect to the [maxdb71] due to:
* It seems the problem was that I forgot to load the DB2 profile
* . ~db2inst1/sqllib/db2profile
* rake db:migrate --trace
* I got a different problem:
Failed to connect to the [maxdb71] due to: [IBM][CLI Driver] CLI0133E Option type out of range. SQLSTATE=HY092 SQLCODE=-99999
* export LIBPATH=/home/db2inst1/sqllib/lib
* It seems I need to read some instructions on http://www.alphaworks.ibm.com/tech/db2onrails. Another day.

Wednesday, January 7, 2009

Ruby plugin for Eclipse

I found a Ruby plugin for Eclipse at: http://update.aptana.com/update/rails/3.2/

I had first to install Eclipse Monkey, described at http://www.brain-bakery.com/projects/articles/eclipse-monkey-scripting/. Also I had to install the Aptana Studio from: http://www.aptana.com/docs/index.php/Plugging_Aptana_into_an_existing_Eclipse_configuration

There are some instructions on how to use at: http://www.ibm.com/developerworks/opensource/library/os-rubyeclipse/

Saturday, January 3, 2009

My first experience with Ruby on Rails

It's a new year, so I decided to learn a new language / environment. I chose Ruby on Rails, so here comes my experience.

I am following the tutorial at: http://www.onlamp.com/pub/a/onlamp/2006/12/14/revisiting-ruby-on-rails-revisited.html, which was written for Rails 1.2.6. My environment though is Rails 2.0. So, here are the steps I followed:

  • cd ~/Rails
  • rails cookbook2
  • cd cookbook2
  • rake db:create:all
  • ./script/generate scaffold Category name:string
  • rake db:migrate
  • ./script/server
Then point your browser to http://localhost:3000. It shows the RoR welcome page.

  • Edit the file config/routes.rb and add the following lines:
ActionController::Routing::Routes.draw do |map|
  map.root :controller => 'categories'
  map.resources :categories
end

Now point your browser to http://localhost:3000/categories, and you are ready to create categories!

Creating the recipe table

  • ./script/generate scaffold Recipe category:references title:string description:string instructions:text
  • rake db:migrate
Now point your browser to http://localhost:3000/recipes and you're ready to create recipes!