There are a few things to consider here.
1. On the back end there's the disk technology that you choose: SATA, SAS, or FC. SAS and FC provide much higher random IO performance; for sequential IO, the performance is close.
2. The disks are usually combined into some sort of RAID array, and different RAID levels have different write penalties. For example, RAID 5 has a write penalty of 4 and RAID 10 has a write penalty of 2. If your workload contains a decent percentage of writes, this could be a factor as well.
3. Then there's how you connect the host to the array. There are block protocols like FC SAN, iSCSI SAN, and FCoE, all of which basically encapsulate SCSI to access blocks. Then there are file protocols like NFS and CIFS, which are used to access files.
If your workload is mainly large sequential IO, then SATA on the back end is an easy choice. If it's small-block random IO, then you'd likely want to go with SAS or FC. If the workload is mainly reads, use RAID 5; if it has a decent percentage of writes as well, consider RAID 10.

If your host connects using a file protocol, then it's either NFS or CIFS. If the host connects via a block protocol, then FC, iSCSI, or FCoE. It just depends on what you have in place and whether you're willing to invest in new infrastructure to connect the host to the array. You do mention SVC, which would imply FC connectivity, so I'm guessing you have an existing FC SAN infrastructure. 2TB isn't big, though; SVC might be overkill.
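To see how the write penalties above play out, here's a minimal sketch of the standard back-end IOPS calculation (back-end IOPS = reads + writes × penalty). The function name and the example workload numbers are just for illustration:

```python
# Back-end IOPS the disks must supply for a given front-end workload,
# using the standard write-penalty formula: backend = reads + writes * penalty.
# Penalties for RAID 5 (4) and RAID 10 (2) are as noted above.
RAID_WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid5": 4, "raid10": 2}

def backend_iops(frontend_iops, write_fraction, raid_level):
    """Return the back-end IOPS needed to serve the front-end workload."""
    writes = frontend_iops * write_fraction
    reads = frontend_iops - writes
    return reads + writes * RAID_WRITE_PENALTY[raid_level]

# Hypothetical workload: 1000 front-end IOPS, 30% writes.
# RAID 5:  700 + 300*4 = 1900 back-end IOPS
# RAID 10: 700 + 300*2 = 1300 back-end IOPS
print(backend_iops(1000, 0.30, "raid5"))   # 1900.0
print(backend_iops(1000, 0.30, "raid10"))  # 1300.0
```

Note how a write-heavy workload on RAID 5 can require roughly 50% more spindles than the same workload on RAID 10, which is why the read/write mix matters when you pick the RAID level.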
XMSRE