
Tehuti 10 Gigabit TOE SmartNIC Driver

 

make

make install

 

cd tehuti-7.33.5.2

 

cat README    # read the README file

 

Rather than loading it with modprobe, add the following line to /etc/rc.d/rc.local (e.g. with vim /etc/rc.d/rc.local): insmod /lib/modules/`uname -r`/kernel/drivers/net/tehuti.ko

 

Run the command once and the NIC driver module will be loaded.
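A quick way to confirm the load succeeded is to look for the module name in /proc/modules (a sketch, equivalent to `lsmod | grep tehuti`):

```shell
# Check whether the tehuti module is currently loaded by scanning
# /proc/modules; each line there starts with a module name.
if grep -qw '^tehuti' /proc/modules; then
    echo "tehuti loaded"
else
    echo "tehuti not loaded"
fi
```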

 

The kernel can be instructed to use more memory (buffers, caches, socket, data) to prevent memory from becoming a bottleneck.


 

cd tehuti-7.33.5.2/

sysctl -p sysctl_luxor.conf

 

cat sysctl_luxor.conf
# some of the defaults may be different for your kernel
# call this file with sysctl -p <this file>
# these are just suggested values that worked well to increase throughput in
# several network benchmark tests, your mileage may vary

### IPV4 specific settings
# turns TCP timestamp support off, default 1, reduces CPU use
net.ipv4.tcp_timestamps = 0
# turn SACK support off, default on
net.ipv4.tcp_sack = 0
# on systems with a VERY fast bus -> memory interface this is the big gainer
# sets min/default/max TCP read buffer, default 4096 87380 174760
net.ipv4.tcp_rmem = 10000000 10000000 10000000
# sets min/pressure/max TCP write buffer, default 4096 16384 131072
net.ipv4.tcp_wmem = 10000000 10000000 10000000
# sets min/pressure/max TCP buffer space, default 31744 32256 32768
net.ipv4.tcp_mem = 10000000 10000000 10000000

### CORE settings (mostly for socket and UDP effect)
# maximum receive socket buffer size, default 131071
net.core.rmem_max = 8388608
# maximum send socket buffer size, default 131071
net.core.wmem_max = 8388608
# default receive socket buffer size, default 65535
net.core.rmem_default = 65536
# default send socket buffer size, default 65535
net.core.wmem_default = 65536
# maximum amount of option memory buffers, default 10240
net.core.optmem_max = 524287
# number of unprocessed input packets before kernel starts dropping them, default 300
net.core.netdev_max_backlog = 300000
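After applying the file with `sysctl -p`, the values can be spot-checked straight from /proc/sys (a sketch; each sysctl key maps to a path under /proc/sys with the dots replaced by slashes):

```shell
# Read back a couple of the keys set in sysctl_luxor.conf above.
cat /proc/sys/net/core/rmem_max       # maximum receive socket buffer size
cat /proc/sys/net/ipv4/tcp_rmem       # min/default/max TCP read buffer
```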

 

Linux 2.6.35 and above supports a feature called RPS (Receive Packet Steering).

This feature allows spreading received packets across specified processors.

To use this feature, each interface has two configuration variables.

 

/sys/class/net/eth?/queues/rx-0/rps_cpus

/sys/class/net/eth?/queues/rx-0/rps_flow_cnt

 

cat /sys/class/net/eth2/queues/rx-0/rps_cpus

0000

cat /sys/class/net/eth2/queues/rx-0/rps_flow_cnt

 0

 

There is a script in the driver source directory; run the command below and check again, and the values change.

 

./init-rps eth2 eth3

 

cat /sys/class/net/eth2/queues/rx-0/rps_cpus

ffff

cat /sys/class/net/eth2/queues/rx-0/rps_flow_cnt

 256
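The ffff mask above enables RPS on 16 CPUs (one bit per CPU). A sketch of how such an all-CPUs mask can be computed by hand, presumably similar to what the bundled init-rps script writes (assumption: fewer than 64 CPUs, so the mask fits in one word):

```shell
# Build a hex bitmask with one bit set per online CPU:
# 4 CPUs -> f, 16 CPUs -> ffff.
cpus=$(nproc)
mask=$(printf '%x' $(( (1 << cpus) - 1 )))
echo "$mask"

# As root it would then be written to the per-queue file, e.g.:
# echo "$mask" > /sys/class/net/eth2/queues/rx-0/rps_cpus
```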

 

The mask and the hash table size are automatically calculated unless specified explicitly by the user.


 

On Linux 2.6.35 and above the driver includes one parameter, which is used only for testing purposes.

/sys/module/tehuti/parameters/paged_buffers

It defaults to 1; setting it to zero disables paged buffers, which will slow down the driver.
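A sketch for checking the current value; the sysfs path exists only while the tehuti module is loaded:

```shell
# Read the paged_buffers module parameter if present; 1 means paged
# buffers are enabled (the default), 0 means disabled.
p=/sys/module/tehuti/parameters/paged_buffers
if [ -r "$p" ]; then
    cat "$p"
else
    echo "tehuti module not loaded"
fi
```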

 

