Always happy to help...
If I understand you correctly, you are worried about security aspects - in
this case several things could / should be done:
1. DMZ ( Demilitarized Zone )
Put your server with two NICs into a DMZ so that you have a sharp
distinction between internal and external network connections. Set up both
firewalls so that even if someone hacks your server, your internal network
won't be in danger...
2. Binding
Bind server processes only to the necessary ports and shut down any
unneeded services.
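A minimal sketch of that idea in Python - binding the service to one
specific interface address instead of all of them ( the address and port
here are just placeholders for your internal NIC ):

import socket

# Bind only to one specific address instead of 0.0.0.0, so the
# process is unreachable through the other ( external ) interface.
INTERNAL_ADDR = ("127.0.0.1", 5000)   # stand-in for your internal NIC

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(INTERNAL_ADDR)               # listen on this interface only
srv.listen(5)
print("listening on", INTERNAL_ADDR)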
Implement your protocol in a way that clients need to authenticate against
your server before issuing requests.
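One way to do that ( just a sketch, assuming a pre-shared key between
client and server ) is a challenge/response handshake, here in Python:

import hmac, hashlib, secrets

SHARED_KEY = b"replace-with-a-real-secret"   # assumption: pre-shared key

def make_challenge():
    # server side: send a random nonce to the client
    return secrets.token_bytes(16)

def sign(challenge):
    # client side: prove knowledge of the key without ever sending it
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def verify(challenge, answer):
    # server side: constant-time comparison to avoid timing leaks
    return hmac.compare_digest(sign(challenge), answer)

challenge = make_challenge()
assert verify(challenge, sign(challenge))   # client may now issue requests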
Some sort of encryption ( from fast to very strong - even think about
public key encryption ) would help to prevent misuse of your server...
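At the strong end of that range, TLS gives you encryption and optionally
client certificates as well. A sketch with Python's ssl module - the
certificate file names are placeholders, you would supply your own:

import socket, ssl

# Wrap the listening socket in TLS so all traffic is encrypted.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")

with socket.create_server(("127.0.0.1", 5000)) as srv:
    with ctx.wrap_socket(srv, server_side=True) as tls_srv:
        conn, addr = tls_srv.accept()   # TLS handshake happens here
        data = conn.recv(1024)          # already decrypted for us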
For extreme security consider using a very different approach:
A store-and-forward approach even works without any direct connections
between clients and servers. You would need to set up some sort of
messaging-based solution ( like MQ Series from IBM ) and let your clients
submit their requests through the MQ Series server.
These servers just queue the requests and responses. You write handlers
which can run on as many internal servers as you like. The handler polls the
queue and does whatever it has to. When it is finished with the work it
queues up the response into the response queue...
The client just polls the response queue and takes the response out there...
This way the server accessible to the clients has very limited capabilities,
and even if it gets compromised it can do no direct harm to any real ( =
processing ) server.
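The pattern itself fits in a few lines. Here is a sketch using Python's
in-process queues as a stand-in for a real message broker like MQ Series
( whose actual API is not shown here ):

import queue, threading

requests  = queue.Queue()   # clients put work here
responses = queue.Queue()   # handlers put results here

def handler():
    # runs on any number of internal processing servers
    while True:
        req = requests.get()            # poll the request queue
        if req is None:
            break                       # shutdown signal
        responses.put(("done", req))    # queue up the result

worker = threading.Thread(target=handler)
worker.start()

requests.put("calculate something")     # client stores a request
print(responses.get())                  # client polls the response queue
requests.put(None)
worker.join()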
I don't know enough about the nature of your protocol. If you have something
like connect - request - long processing time - response - disconnect, then
you could just use shared memory as the IPC mechanism and have one or more
apps take the pending requests out of there and put the responses back....
If multiple request/work/response cycles happen between connect and
disconnect, the same could still be used...
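A sketch of that hand-off via shared memory, using Python's
multiprocessing.shared_memory module ( the one-byte state flag and the
block layout are just assumptions for illustration ):

from multiprocessing import shared_memory

# A fixed-size shared block as the hand-off area between the listener
# process and a processing app. Byte 0 = state flag, rest = payload.
shm = shared_memory.SharedMemory(create=True, size=1024, name="reqslot")

# listener side: drop a pending request into the slot
payload = b"client 42: do the long calculation"
shm.buf[1:1 + len(payload)] = payload
shm.buf[0] = 1          # 1 = request pending

# processing side ( normally a separate process attaching by name ):
if shm.buf[0] == 1:
    req = bytes(shm.buf[1:1 + len(payload)])
    print("processing:", req)
    shm.buf[0] = 2      # 2 = response ready

shm.close()
shm.unlink()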
Perhaps the following approach is better suited if your processing needs a
long time to finish:
- Every client gets at first connect a "session id" ( this could even be
crafted to serve as some sort of security measure )
- Every client announces at which port it will listen for responses
- Every client sets up a listener thread on that port
Communication from the clients to your server happens over UDP for the
sending of requests...
Responses are actively sent from the server to the client ( to the listening
port ), which could be TCP or UDP...
This way a lot more clients could be active than TCP alone would allow ( TCP
has at most 65535 ports per address... ) without having any impact on your
server except the processing load...
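A self-contained sketch of that scheme in Python - session id, announced
reply port, and the server actively pushing the response back over UDP
( the ports and the message format are made up for the example ):

import socket, uuid

SERVER = ("127.0.0.1", 9000)
session_id = uuid.uuid4().hex            # handed out at first connect

# client's listener socket for responses ( the "listener thread" part )
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))          # any free port
reply_port = listener.getsockname()[1]   # announced to the server

# server socket ( in the same script only for the sake of the demo )
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(SERVER)

# client sends a request tagged with its session id and reply port
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(f"{session_id}:{reply_port}:do some work".encode(), SERVER)

# server receives, processes, and actively sends the response back
data, _ = server.recvfrom(1024)
sid, port, request = data.decode().split(":", 2)
server.sendto(f"{sid}:finished {request}".encode(), ("127.0.0.1", int(port)))

print(listener.recvfrom(1024)[0].decode())   # the listener picks it up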
The processing load could even be distributed internally to many processing
servers using some other IPC mechanism ( which could include communicating
via TCP or UDP on an internal NIC ).
HTH
Yahia
"Ian Davies" <
XXXX@XXXXX.COM>schrieb im Newsbeitrag
Quote
Hello Yahia,
Many thanks for the links and comments. This has really convinced me that
what I am trying to achieve is probably a bad thing.
I still need to isolate the server from the clients so that errant clients
do not kill the server. For this, I will still implement the listener as a
separate process that communicates with the server using pipes or a
connectionless protocol of some kind. I don't believe threads will provide
sufficient insulation from hack attempts or crashed clients, but I am now
convinced that there should only be one listener for each server, thereby
avoiding any issues surrounding reusing sockets.
Incidentally, do you know of any resources that describe how something like
Oracle handles its clients? My application is much closer to a database
server than a web server, as the connections are long-lived rather than the
request-response of HTTP.
Thanks again.
Ian.