Examining The Impact On Silicon Area When Using A Custom Design Grid

Oleg Oncea from IC Mask Design analyses the area cost when using a custom design grid for layout design

Using a custom design grid in a layout offers many important advantages, but there is a potential price to pay for using one, particularly in terms of increased area. In this article we analyse the actual cost implications in detail.

Key advantages of a Layout Design Grid:

  • Improved uniformity across a design (conductor spacing, width & density)
  • Reduced lateral capacitive coupling
  • Better control of density requirements
  • Design rule clean by construction
  • Enhanced design for manufacturability
  • Reduced layout time (fewer requirements to zoom in/out or use rulers, and less concern about DRC fixes)
  • Easier to migrate layout from one technology to another (between foundries and/or between nodes)

Overall, grids make the layout process faster and more uniform, and guarantee a DFM clean layout by construction. However, whilst these advantages are likely to reduce both development costs and the cost of the final silicon, engineers should be aware that there are also some disadvantages to using custom grids.

Key disadvantages of a Layout Design Grid:

  • Placing devices on the design grid can be time consuming (automation can help)
  • Schematic designs which are not optimised for a custom design grid can potentially be slower to lay out
  • Potential increase in silicon area!

By far the largest concern about implementing a gridded design is the potential area cost of using non-minimum metal, OD and poly spacing in the layout.

There are two places where conductor spacing can impact silicon area – device spacing (particularly when devices are placed as arrays) and routing channels. Here we will analyse the area cost of using a design grid in both cases.

Cost analysis – device arrays

[Figure: device/cell placement boundary – minimum rule vs design grid]

The boundaries above represent the device/cell area, plus half minimum conductor spacing on all sides, such that when the devices are placed down with abutting boundaries, device spacing is adhered to. By snapping a device (or cell) to a design grid, you increase the placement boundary area, effectively increasing the device area.

When the devices are placed as an array, we can analyse the area of both the minimum rule boundary and the gridded boundary:

Minimum Rule                                                                    Design Grid

Amin = x · y                                                                    Agrid = (x + Δx) · (y + Δy)

where x and y are the width and height of the minimum rule boundary, and Δx and Δy are the amounts by which the boundary must grow in each axis to place the cell on the design grid.

From this we can now determine the factor by which the area has increased:

Area Increase = Agrid / Amin → ((x + Δx) · (y + Δy)) / (x · y) → (1 + Δx/x) · (1 + Δy/y)

For Δx and Δy that are small relative to x and y, the cross term (Δx/x) · (Δy/y) is negligible. So the overall increase in area is:

Area Increase ≈ 1 + Δx/x + Δy/y

Some very important points are worth observing from this analysis:

  • The larger x is with respect to Δx, and/or y with respect to Δy, the smaller the area increase will be. So the larger the device size, the smaller the impact on silicon area when using a design grid.
  • If device sizes can be chosen such that x and y are already on (or close to) the design pitch, so that Δx and Δy are zero (or small), then the impact on silicon area will be correspondingly small (or zero).
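For readers who want to experiment with these trade-offs, the short Python sketch below evaluates the area-increase factor derived above; the function name and the demonstration values are illustrative and are not taken from the article.

    # Sketch of the area-increase estimate for an arrayed cell.
    # x, y   : minimum rule boundary dimensions of the cell (nm)
    # dx, dy : growth of the boundary in each axis needed to place the cell on the design grid (nm)
    def array_area_increase(x, y, dx, dy):
        # Exact ratio Agrid / Amin; approximately 1 + dx/x + dy/y when dx and dy are small.
        return ((x + dx) * (y + dy)) / (x * y)

    # Illustrative example (not from the article): a 1000nm x 1000nm boundary growing by 10nm in each axis
    print(round(array_area_increase(1000.0, 1000.0, 10.0, 10.0), 3))   # 1.02, i.e. ~2% area increase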

 

Practical Example:

Let us take the practical example of a minimum length MOS device in a generic 28nm technology:

  • Poly to poly spacing (100nm in this case) is the dominant spacing.
  • The boundary extends 50nm beyond the dummy poly stripes on either side, so there will be the minimum 100nm horizontal spacing between the stripes of adjacent cells.
  • The boundary width is 390nm.
  • The height of the poly stripes is 580nm.
  • Allowing for poly to poly vertical spacing of 100nm between cells, the boundary height is 680nm.
  • Device boundary is x = 390nm, y = 680nm.
  • Total device area = 265,200nm²

 

The device, when arrayed, will adhere to minimum poly to poly spacing on all sides.

If we now apply a layout design grid of 80nm to this cell and its boundary, ensuring the centre of each device will be on grid and that minimum poly to poly spacing is not violated, we can analyse the cost in area. (80nm is an arbitrarily chosen value)

Currently, device centre to centre spacing is 390nm in the horizontal direction (minimum rule).

 

  • To snap the devices to an 80nm grid, this spacing would need to increase by 10nm, so that centre to centre spacing would be 400nm (an integer multiple of the design grid).
  • The boundary of each cell would increase in the x axis by 5nm (Δx = 5nm)

The device spacing (centre to centre) in the vertical direction is 680nm

  • To snap the devices to an 80nm grid, this spacing would need to increase by 40nm so that centre to centre spacing would be 720nm.
  • The boundary of each cell would increase in the y axis by 20nm (Δy = 20nm)

From this increase, we can calculate the total area increase

Area Increase = 1 + Δy/y + Δx/x → 1 + 20nm/680nm + 5nm/390nm → 1 + 0.029 + 0.0128 → 1.04

So there is a 4% increase in area when implementing an 80nm design grid.

[Figure: minimum rule boundary vs design grid boundary for the example device]

It is worth noting that with a different design grid (e.g. 65nm), Δx would be 0nm and Δy would be 17.5nm, leading to a 2.5% increase in area. This increase could be reduced further, or removed entirely, by choosing an “on grid” device width (e.g. changing the current width of 210nm to 245nm), leading to no increase in area at all. Optimising schematic designs for adherence to layout design grids reduces area cost.
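As a quick cross-check, the sketch below (variable names are illustrative) reproduces the figures above using the Δx and Δy values quoted for the 80nm and 65nm grids.

    # Minimum rule boundary of the example device (nm)
    x, y = 390.0, 680.0

    # 80nm design grid: Δx = 5nm, Δy = 20nm (values quoted above)
    print(1 + 5.0 / x + 20.0 / y)    # ~1.042 -> the ~4% area increase

    # 65nm design grid: Δx = 0nm, Δy = 17.5nm
    print(1 + 0.0 / x + 17.5 / y)    # ~1.026 -> the ~2.5% area increase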

 

Cost analysis – interconnect

 

Minimum Rule                                                               Design Grid

Amin = (W x Nw) + (S x (Nw - 1))                           Agrid = (W x Nw) + (Sgrid x (Nw - 1))

Amin = (W x Nw) + (S x Nw) - S                             Agrid = (W x Nw) + (Sgrid x Nw) - Sgrid

Amin = ((W + S) x Nw) - S                                  Agrid = ((W + Sgrid) x Nw) - Sgrid

For large routing channels, ((W + S) x Nw) - S ≈ (W + S) x Nw, as one wire spacing is small in the grander scheme of things.

Amin = (W + S) x Nw                                        Agrid = (W + Sgrid) x Nw

Area Increase = Agrid / Amin → ((W + Sgrid) x Nw) / ((W + S) x Nw) → (W + Sgrid) / (W + S)

 

 

Where:

  • Width = W
  • Number of wires = Nw
  • Minimum spacing = S
  • Spacing required by the design grid = Sgrid
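A minimal sketch of this calculation is shown below (Python; the helper names are illustrative). It assumes the wire pitch W + S is rounded up to the nearest integer multiple of the design grid so that every track centre lands on grid, which is how Sgrid is obtained.

    import math

    def grid_wire_spacing(W, S, grid):
        # Smallest spacing >= S that puts the wire pitch (W + S) on the design grid
        # (assumes the pitch is rounded up to a whole number of grid units).
        pitch = math.ceil((W + S) / grid) * grid
        return pitch - W

    def routing_area_increase(W, S, grid):
        # Area-increase factor (W + Sgrid) / (W + S) for a large routing channel.
        return (W + grid_wire_spacing(W, S, grid)) / (W + S)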

Practical Example:

If we take a practical example of a generic 28nm technology where W = 100nm and S = 60nm, the track spacing, centre to centre, would be 160nm. As this is already an integer multiple of the 80nm design grid, Sgrid would also be 60nm, so there would be no increase in area.

However, if we were to choose a different design grid (for example 65nm), the track spacing Sgrid would have to be increased to 95nm (from 60nm) to ensure all tracks were centred on grid.

 

Area Increase = Agrid / Amin → (W + Sgrid) / (W + S) → (100nm + 95nm) / (100nm + 60nm) → 1.218

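Plugging the example values into that relationship directly (a sketch, using the figures quoted above):

    # W = 100nm, S = 60nm (28nm example above)
    W, S = 100.0, 60.0
    print((W + 60.0) / (W + S))   # 80nm grid: Sgrid stays 60nm -> 1.0, no area increase
    print((W + 95.0) / (W + S))   # 65nm grid: Sgrid grows to 95nm -> 1.21875, the ~22% increase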
 

 

In this case it would lead to a ~22% increase in routing area! However, it is very important to note that with increasing metal layer stacks and requirements for minimum and maximum local poly density, most routing now takes place between the spaced devices and/or over devices, so the requirement for dedicated routing channels has reduced. Thus an increase in routing area does not necessarily translate directly into an increase in silicon area.

Final thoughts

With the advent of multi-patterning at 20nm and FinFET device pitches at 16nm, the requirement for pitch-based, uniform layout design increases. As semiengineering.com’s Mark Lapedus confirmed when discussing 10nm and 7nm design, grids are where the industry is going: “There is a general move towards track and grid based layout forms. Expect this trend to increase moving forwards.”

 

© Copyright of IC Mask Design