There are three likely routes for the development of server software on the web.
The first route is cooperating servers: some servers could develop documented, more formal input and output formats, which other servers could then build on. The effect is wonderfully parallel, provided each server can arrange to send out all of its requests before it needs the replies.
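A minimal sketch of that request pipelining in Python; the host names, port, and wire format here are hypothetical stand-ins for such a documented protocol.

    import asyncio

    # Hypothetical servers speaking a documented request/reply format.
    SERVERS = ["stocks.example.com", "weather.example.com", "news.example.com"]

    async def query(host: str, request: bytes) -> bytes:
        # Open a connection, send the request, signal end of request,
        # then read the whole reply.
        reader, writer = await asyncio.open_connection(host, 8080)
        writer.write(request)
        await writer.drain()
        writer.write_eof()
        reply = await reader.read()
        writer.close()
        await writer.wait_closed()
        return reply

    async def main() -> None:
        # Every request goes out before any reply is awaited, so the
        # round-trip times overlap instead of adding up.
        tasks = [asyncio.create_task(query(h, b"GET summary\n")) for h in SERVERS]
        replies = await asyncio.gather(*tasks)
        for host, reply in zip(SERVERS, replies):
            print(host, len(reply), "bytes")

    asyncio.run(main())

Because all three connections are opened before any reply is awaited, the total wait is roughly one round trip, not three.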
The fly in the ointment is relativity. The speed-of-light time to Kansas and back is about 20ms - already longer than a disk drive access. Present network technology takes 100ms or more.
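The arithmetic, with assumed figures (a one-way distance of roughly 2,400 km from a coast to Kansas, and signal speed in fiber at about two thirds of c):

    km_one_way = 2400            # assumed coast-to-Kansas distance, km
    c_vacuum = 300_000           # speed of light in vacuum, km/s
    c_fiber = 200_000            # roughly 2/3 c in optical fiber, km/s

    round_trip_km = 2 * km_one_way
    print(round_trip_km / c_vacuum * 1000)   # 16.0 ms - the physical floor
    print(round_trip_km / c_fiber * 1000)    # 24.0 ms - in fiber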
To make this work on a national scale, we need execution in the network, or at least caching execution servers. Then, to make all the copies of a server (executing who knows where) give correct results, we need a wide-area file system. We also need a way to pay for the use of someone's server, in increments of less than a penny.
This architecture is really the ultimate in code reuse.
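A toy sketch of the caching execution server mentioned above: a proxy that pays the wide-area round trip once per request and answers repeats locally. The class, its TTL cache policy, and the digest-keyed table are all invented for illustration.

    import hashlib
    import time

    class CachingExecutionServer:
        """Toy proxy: execute a request once, serve repeats from cache."""

        def __init__(self, origin, ttl_seconds=60):
            self.origin = origin      # callable standing in for the distant server
            self.ttl = ttl_seconds
            self.cache = {}           # request digest -> (expiry, reply)

        def handle(self, request: bytes) -> bytes:
            key = hashlib.sha256(request).hexdigest()
            hit = self.cache.get(key)
            if hit and hit[0] > time.time():
                return hit[1]             # served at the edge, no long round trip
            reply = self.origin(request)  # the expensive wide-area call
            self.cache[key] = (time.time() + self.ttl, reply)
            return reply

    def origin(request: bytes) -> bytes:
        return b"result for " + request   # stands in for a distant server

    edge = CachingExecutionServer(origin)
    edge.handle(b"render page 7")         # pays the full round trip
    edge.handle(b"render page 7")         # answered from the local cache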
The second route is the monolithic server. The web page it generates is complete, with a thousand options and features. You customize pages by giving information to a private database, and if the server needs information from elsewhere, it generally gets it through some proprietary protocol.
This style puts less of a demand on network response time.
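A toy sketch of that style, with an invented schema: every option lives in the server's own private database, and the page is assembled without a single outside request.

    import sqlite3

    # One server, one private database of per-user options.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE prefs (user TEXT PRIMARY KEY, theme TEXT, units TEXT)")
    db.execute("INSERT INTO prefs VALUES ('alice', 'dark', 'metric')")

    def render_page(user: str) -> str:
        row = db.execute(
            "SELECT theme, units FROM prefs WHERE user = ?", (user,)
        ).fetchone()
        theme, units = row if row else ("default", "imperial")
        # Every option is resolved locally; no request leaves the server.
        return f"<html><body class='{theme}'>Forecast in {units}</body></html>"

    print(render_page("alice"))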
The third route runs through the browser: if applets execute on the user's machine, there will be little push to develop server software.