Markovian Queueing Systems
Oliver C. Ibe, in Markov Processes for Stochastic Modeling (Second Edition), 2013
7.11 Markovian Networks of Queues
Many complex systems can be modeled by networks of queues in which customers sequentially receive service from one or more servers, where each server has its own queue. Thus, a network of queues is a collection of individual queueing systems that have common customers. In such a system, a departure from one queueing system is often an arrival to another queueing system in the network. Such networks are commonly encountered in communication and manufacturing systems. An interesting example of a network of queues is the flow of patients through an outpatient hospital facility in which no prior appointment is needed to see the doctor.
In such a facility, patients first arrive at a registration booth, where they are processed on an FCFS basis. They then proceed to a waiting room to see a doctor, again on an FCFS basis. After seeing the doctor, a patient may be sent for a laboratory test, which involves another wait, or may be given a prescription for medication. A patient who chooses to fill the prescription at the hospital pharmacy joins another queue; otherwise, he or she leaves the facility and gets the medication elsewhere. Each of these activities is a queueing system that is fed by the same patients. Interestingly, after a laboratory test is completed, a patient may be required to take the result back to the doctor, which means that he or she rejoins a previous queue. After seeing the doctor a second time, the patient may either leave the facility without visiting the pharmacy or visit the pharmacy and then leave the facility.
A queueing network is completely defined when we specify the external arrival process, the routing of customers among the different queues in the network, the number of servers at each queue, the service time distribution at each queue, and the service discipline at each queue. The network is modeled by a connected directed graph whose nodes represent the queueing systems and whose arcs carry weights that are the routing probabilities: the weight on arc (A, B) is the probability that a customer who leaves the queue represented by node A will next go to the queue represented by node B. If no arc exists between two nodes, then a customer leaving one node does not go directly to the other; alternatively, we say that the weight of such an arc is zero.
A network of queues is classified as either open or closed. In an open network, there is at least one node through which customers enter the network and at least one node through which customers leave the network. Thus, in an open network, a customer cannot be prevented from leaving the network. Figure 7.17 is an illustration of an open network of queues.
Figure 7.17. Example of open network of queues.
By contrast, in a closed network, there are no external arrivals or departures allowed. In this case, there is a fixed number of customers who are circulating forever among the different nodes. Figure 7.18 is an illustration of a closed network of queues. Alternatively, it can be used to model a finite-capacity system in which a new customer enters the system as soon as one customer leaves the system. One example of a closed network is a computer system that at any time has a fixed number of programs that use the CPU and I/O resources.
Figure 7.18. Example of closed network of queues.
A network of queues is called a Markovian network of queues (or Markovian queueing network) if it can be characterized as follows. First, the service times at the different queues are exponentially distributed with possibly different mean values. Second, if they are open networks of queues, then external arrivals at the network are according to Poisson processes. Third, the transitions between successive nodes (or queues) are independent. Finally, the system reaches a steady state.
To analyze Markovian networks of queues, we first consider an important result known as Burke's output theorem (Burke, 1956).
7.11.1 Burke's Output Theorem and Tandem Queues
Burke proved that the departure process of an M/M/c queue is Poisson. Specifically, Burke's theorem states that for an M/M/c queue in the steady state with arrival rate λ, the following results hold:
- a. The departure process is a Poisson process with rate λ. Thus, the departure process is statistically identical to the arrival process.
- b. At each time t, the number of customers in the system is independent of the sequence of departure times prior to t.
An implication of this theorem is as follows. Consider a queueing network with N queues in tandem, as shown in Figure 7.19. The first queue is an M/M/1 queue through which customers arrive from outside with rate λ. The service rate at queue i is μ_i, and λ < μ_i for every i; that is, the system is stable. Thus, the other queues are x/M/1 queues, and for now we assume that we do not know precisely what "x" is. According to Burke's theorem, the arrival process at the second queue, which is the output process of the first queue, is Poisson with rate λ. Similarly, the arrival process at the third queue, which is the output process of the second queue, is Poisson with rate λ, and so on. Thus, x = M and each queue is essentially an M/M/1 queue.
Figure 7.19. A network of N queues in tandem.
Assume that customers are served on an FCFS basis at each queue, and let N_i denote the steady-state number of customers at queue i, i = 1, …, N. Since a departure at queue i results in an arrival at queue i + 1, it can be shown that the joint probability of queue lengths in the network is given by
P[N_1 = n_1, N_2 = n_2, …, N_N = n_N] = ∏_{i=1}^{N} (1 − ρ_i)ρ_i^{n_i},  ρ_i = λ/μ_i   (7.55)
Since the quantity (1 − ρ_i)ρ_i^{n_i} denotes the steady-state probability that the number of customers in an M/M/1 queue with utilization factor ρ_i is n_i, the above result shows that the joint probability distribution of the number of customers in the network of N queues in tandem is the product of the N individual probability distributions. That is, the numbers of customers at distinct queues at a given time are independent. For this reason, the solution is said to be a product-form solution, and the network is called a product-form queueing network.
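The product-form result is easy to check numerically. The following sketch evaluates the joint queue-length pmf for M/M/1 queues in tandem; the arrival rate, service rates, and state used below are illustrative values, not figures from the text.

```python
# Hedged sketch: joint queue-length pmf for N M/M/1 queues in tandem,
# using the product form prod_i (1 - rho_i) * rho_i**n_i with rho_i = lam/mu_i.

def tandem_joint_pmf(lam, mu, n):
    """Joint pmf P[N_1 = n_1, ..., N_N = n_N] for stable tandem M/M/1 queues."""
    p = 1.0
    for mu_i, n_i in zip(mu, n):
        rho = lam / mu_i
        assert rho < 1.0, "each queue must be stable (lam < mu_i)"
        p *= (1.0 - rho) * rho ** n_i
    return p

# probability that both queues are empty when lam = 1 and mu = (2, 4):
p_empty = tandem_joint_pmf(1.0, [2.0, 4.0], (0, 0))
```

Summing the pmf over a large grid of states recovers 1, which is a quick consistency check of the product form.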
A further generalization of this system of tandem queues is a feed-forward queueing network, where customers can enter the network at any queue but a customer cannot visit a previous queue, as shown in Figure 7.20.
Figure 7.20. A feed-forward queueing network.
Let γ_j denote the external Poisson arrival rate at queue j, and let r_{ij} denote the probability that a customer who has finished receiving service at queue i next proceeds to queue j, where r_{i0} denotes the probability that the customer leaves the network after receiving service at queue i. Then the rate at which customers arrive at queue j (from both outside and from other queues in the network) is given by
λ_j = γ_j + Σ_i λ_i r_{ij}   (7.56)
It can be shown that if the network is stable (i.e., ρ_j = λ_j/μ_j < 1 for all j), then each queue is essentially an M/M/1 queue and a product-form solution exists. That is,
P[N_1 = n_1, …, N_N = n_N] = ∏_{j=1}^{N} (1 − ρ_j)ρ_j^{n_j}
When a customer is allowed to visit a previous queue (i.e., a feedback loop exists), the output process of an M/M/1 queue is no longer Poisson because the arrival process and the departure process are no longer independent. For example, if with probability 0.9 a customer returns to the queue after receiving service, then after each service completion there is a very high probability of an arrival (from the same customer that was just served). Markovian queueing networks with feedback are called Jackson networks.
7.11.2 Jackson or Open Queueing Networks
As stated earlier, Jackson networks allow feedback loops, which means that a customer can receive service from the same server multiple times before exiting the network. The service time at each queue is exponentially distributed. As previously discussed, because of the feedback loop, the arrival process at each such queue is no longer Poisson. The question becomes this: Is a Jackson network a product-form queueing network?
Consider an open network of N queues in which customers arrive from outside the network to queue i according to a Poisson process with rate γ_i, and queue i has c_i identical servers. The time to serve a customer at queue i is exponentially distributed with mean 1/μ_i. After receiving service at queue i, a customer proceeds to queue j with probability r_{ij} or leaves the network with probability r_{i0}, where
r_{i0} = 1 − Σ_{j=1}^{N} r_{ij}
Thus, λ_i, the aggregate arrival rate at queue i, is given by
λ_i = γ_i + Σ_{j=1}^{N} λ_j r_{ji}   (7.57)
Let λ = [λ_1, …, λ_N], let γ = [γ_1, …, γ_N], and let P be the routing matrix P = [r_{ij}]. Thus, we may write
λ = γ + λP
Because the network is open and any customer in any queue eventually leaves the network, each entry of the matrix P^n converges to zero as n → ∞. This means that the matrix I − P is invertible and the solution of the preceding equation is
λ = γ(I − P)^{−1}
We assume that the system is stable, which means that ρ_i = λ_i/(c_i μ_i) < 1 for all i. Let N_i be a random variable that defines the number of customers at queue i in the steady state. We are interested in the joint probability mass function P[N_1 = n_1, N_2 = n_2, …, N_N = n_N]. Jackson's theorem (Jackson, 1963) states that this joint probability mass function is given by the following product-form solution:
P[N_1 = n_1, …, N_N = n_N] = p_1(n_1)p_2(n_2)⋯p_N(n_N)
where p_i(n_i) is the steady-state probability that there are n_i customers in an M/M/c_i queueing system with arrival rate λ_i and service rate μ_i. That is, the theorem states that the network acts as if each queue i is an independent M/M/c_i queueing system. Equivalently, we may write as follows for the case of M/M/1 queueing systems (c_i = 1):
p_i(n_i) = (1 − ρ_i)ρ_i^{n_i},  ρ_i = λ_i/μ_i   (7.58a)
P[N_1 = n_1, …, N_N = n_N] = ∏_{i=1}^{N} (1 − ρ_i)ρ_i^{n_i}   (7.58b)
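As a numeric illustration of the traffic equations λ = γ + λP, the sketch below solves them by fixed-point iteration for a made-up two-node network; the external rates and routing probabilities are assumptions, not values from the text. Because the network is open (P^n → 0), the iteration converges.

```python
# Hedged sketch: solve lambda = gamma + lambda * P by repeated substitution.
# gamma and P below are illustrative, not from the text.

def traffic_rates(gamma, P, iters=500):
    """Iterate lam <- gamma + lam P; converges when the network is open."""
    n = len(gamma)
    lam = list(gamma)
    for _ in range(iters):
        lam = [gamma[j] + sum(lam[i] * P[i][j] for i in range(n))
               for j in range(n)]
    return lam

gamma = [1.0, 0.5]          # external Poisson arrival rates
P = [[0.0, 0.4],            # routing probabilities r_ij
     [0.2, 0.0]]
lam = traffic_rates(gamma, P)
```

For an acyclic (feed-forward) network the same equations can instead be solved exactly in one pass in topological order; the iteration above also covers networks with feedback.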
Example 7.7
Consider an open Jackson network with N = 2, as shown in Figure 7.21. This can be used to model a computer system in which new programs arrive at a CPU according to a Poisson process with rate γ. After receiving service with an exponentially distributed time with a mean of 1/μ_1 at the CPU, a program proceeds to the I/O with probability p or leaves the system with probability 1 − p. At the I/O, the program receives an exponentially distributed service with a mean of 1/μ_2 and immediately returns to the CPU for further processing. Assume that programs are processed in a FIFO manner.
Figure 7.21. An example of open Jackson network.
Solution
The aggregate arrival rates of programs to the two queues are
λ_1 = γ + λ_2
λ_2 = pλ_1
From these two equations, we obtain the solutions λ_1 = γ/(1 − p) and λ_2 = pγ/(1 − p). Thus, since each queue behaves as an independent M/M/1 queue, we have that
P[N_1 = n_1, N_2 = n_2] = (1 − ρ_1)ρ_1^{n_1}(1 − ρ_2)ρ_2^{n_2}
where ρ_1 = λ_1/μ_1 = γ/[(1 − p)μ_1] and ρ_2 = λ_2/μ_2 = pγ/[(1 − p)μ_2].
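The flow-balance algebra in this example is easy to verify numerically; the values of γ and p below are made up for illustration.

```python
# Check: lambda_1 = gamma + lambda_2 and lambda_2 = p * lambda_1 have the
# solutions lambda_1 = gamma/(1-p) and lambda_2 = p*gamma/(1-p).

gamma, p = 2.0, 0.25        # illustrative external rate and feedback probability
lam1 = gamma / (1.0 - p)
lam2 = p * gamma / (1.0 - p)
assert abs(lam1 - (gamma + lam2)) < 1e-12   # flow balance at the CPU
assert abs(lam2 - p * lam1) < 1e-12         # flow balance at the I/O
```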
7.11.3 Closed Queueing Networks
A closed network, which is also called a Gordon–Newell network or a closed Jackson queueing network, is obtained by setting γ_i = 0 and r_{i0} = 0 for all i. It is assumed that K customers continuously travel through the network and each node i has a single server. Gordon and Newell (1967) proved that this type of network also has a product-form solution of the form
P[N_1 = n_1, …, N_N = n_N] = (1/G(K)) ∏_{i=1}^{N} ρ_i^{n_i},  n_1 + n_2 + ⋯ + n_N = K   (7.59)
where
G(K) = Σ_{n_1+⋯+n_N=K} ∏_{i=1}^{N} ρ_i^{n_i},  ρ_i = λ_i/μ_i
and λ = [λ_1, …, λ_N] is any nonzero solution of the equation λ = λP, where P is the routing matrix P = [r_{ij}].
Note that the flow balance equation can be expressed in the form λ(I − P) = 0. Thus, the vector λ is an eigenvector of the matrix I − P corresponding to its zero eigenvalue. This means that the traffic intensities ρ_i can only be determined to within a multiplicative constant. However, while the choice of λ influences the computation of the normalizing constant G(K), it does not affect the occupancy probabilities.
G(K) depends on the values of N and K. The possible number of states in a closed queueing network with N queues and K customers is
C(N + K − 1, K)
Thus, it is computationally expensive to evaluate the parameter G(K) even for a moderate network. An efficient recursive algorithm called the convolution algorithm is used to evaluate G(K) for the case where the service rate at each node is constant (load independent). Once G(K) is evaluated, the relevant performance measures can be obtained. The convolution algorithm and other algorithms for evaluating G(K) are discussed in many texts such as Ibe (2011).
Example 7.8
Consider the two-queue network shown in Figure 7.22. Obtain the joint probability when and .
Figure 7.22. An example of closed Jackson network.
Solution
Here and
Thus, if , then
One nonzero solution to the equation is . Thus,
From this we obtain
URL: https://www.sciencedirect.com/science/article/pii/B9780124077959000074
Queueing Systems
Mark A. Pinsky, Samuel Karlin, in An Introduction to Stochastic Modeling (Fourth Edition), 2011
9.5.2 Two Queues in Tandem
Let us use Theorem 9.1 to analyze a simple queueing network composed of two single-server queues connected in series as shown in Figure 9.9.
Figure 9.9. Two queues in series in which the departures from the first form the arrivals for the second.
Let X_k(t) be the number of customers in the kth queue at time t. We assume steady state. Beginning with the first server, the stationary distribution (9.11) for a single-server queue applies, and
Pr{X_1(t) = n} = (1 − λ/μ_1)(λ/μ_1)^n,  n = 0, 1, …
Theorem 9.1 asserts that the departure process from the first server, denoted by D_1(t), is a Poisson process of rate λ that is statistically independent of the first queue length X_1(t). These departures form the arrivals to the second server, and therefore the second system has Poisson arrivals and is thus an M/M/1 queue as well. Thus, again using (9.11),
Pr{X_2(t) = m} = (1 − λ/μ_2)(λ/μ_2)^m,  m = 0, 1, …
Furthermore, because the departures D_1(t) from the first server are independent of X_1(t), it must be that X_2(t) is independent of X_1(t). We thus obtain the joint distribution
Pr{X_1(t) = n, X_2(t) = m} = (1 − λ/μ_1)(λ/μ_1)^n (1 − λ/μ_2)(λ/μ_2)^m
We again caution the reader that the foregoing analysis applies only when the network is in its limiting distribution. In contrast, if both queues are empty at time t = 0, then neither will the departures D_1(t) form a Poisson process, nor will D_1(t) and X_1(t) be independent.
URL: https://www.sciencedirect.com/science/article/pii/B9780123814166000095
Queueing Theory
Sheldon M. Ross, in Introduction to Probability Models (Twelfth Edition), 2019
8.9.3 The G/M/k Queue
In this model we again suppose that there are k servers, each of whom serves at an exponential rate μ. However, we now allow the time between successive arrivals to have an arbitrary distribution G. To ensure that a steady-state (or limiting) distribution exists, we assume the condition kμ > 1/μ_G, where μ_G is the mean of G.
The analysis for this model is similar to that presented in Section 8.7 for the case k = 1. Namely, to avoid having to keep track of the time since the last arrival, we look at the system only at arrival epochs. Once again, if we define X_n as the number in the system at the moment of the nth arrival, then {X_n, n ≥ 0} is a Markov chain.
To derive the transition probabilities of the Markov chain, it helps to first note the relationship
X_{n+1} = X_n + 1 − Y_n
where Y_n denotes the number of departures during the interarrival time between the nth and (n + 1)st arrival. The transition probabilities P_{ij} can now be calculated as follows:
Case 1:  j > i + 1.
In this case it easily follows that P_{ij} = 0.
Case 2:  j ≤ i + 1 ≤ k.
In this case if an arrival finds i in the system, then since i + 1 ≤ k the new arrival will also immediately enter service. Hence, the next arrival will find j if exactly i + 1 − j of the i + 1 services are completed during the interarrival time. Conditioning on the length of this interarrival time yields
P_{ij} = P{exactly i + 1 − j of the i + 1 services are completed in an interarrival time}
      = ∫_0^∞ C(i + 1, j)(e^{−μt})^j (1 − e^{−μt})^{i+1−j} dG(t)
where the last equality follows since, given an interarrival time t, the number of service completions in a time t will have a binomial distribution with parameters i + 1 and 1 − e^{−μt}.
Case 3:  i + 1 ≥ j ≥ k.
To evaluate P_{ij} in this case we first note that when all servers are busy, the departure process is a Poisson process with rate kμ (why?). Hence, again conditioning on the interarrival time we have
P_{ij} = ∫_0^∞ e^{−kμt} (kμt)^{i+1−j}/(i + 1 − j)! dG(t)
Case 4:  i + 1 ≥ k > j.
In this case, since when all servers are busy the departure process is a Poisson process, it follows that the length of time until there will only be k in the system will have a gamma distribution with parameters (i + 1 − k, kμ) (the time until i + 1 − k events of a Poisson process with rate kμ occur is gamma distributed with parameters (i + 1 − k, kμ)). Conditioning first on the interarrival time and then on the time until there are only k in the system (call this latter random variable T_k) yields
P_{ij} = ∫_0^∞ ∫_0^t kμe^{−kμs} (kμs)^{i−k}/(i − k)! · C(k, j)(e^{−μ(t−s)})^j (1 − e^{−μ(t−s)})^{k−j} ds dG(t)
where the last equality follows since, of the k people in service at time s, the number whose service will end by time t is binomial with parameters k and 1 − e^{−μ(t−s)}.
We now can verify, either by a direct substitution into the equations π_j = Σ_i π_i P_{ij}, or by the same argument as presented in the remark at the end of Section 8.7, that the limiting probabilities of this Markov chain are of the form
π_{k−1+j} = cβ^j,  j = 0, 1, …
Substitution into any of the equations π_j = Σ_i π_i P_{ij} when j > k yields that β is given as the solution of
β = ∫_0^∞ e^{−kμt(1−β)} dG(t)
The values π_0, π_1, …, π_{k−2} can be obtained by recursively solving the first k − 1 of the steady-state equations, and c can then be computed by using Σ_i π_i = 1.
If we let W_Q denote the amount of time that a customer spends in queue, then in exactly the same manner as in the G/M/1 case we can show that
W_Q = 0 with probability 1 − cβ/(1 − β),  and W_Q = Exp(kμ(1 − β)) with probability cβ/(1 − β)
where Exp(kμ(1 − β)) is an exponential random variable with rate kμ(1 − β).
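The fixed-point equation for β is the Laplace transform of the interarrival distribution evaluated at kμ(1 − β). The sketch below iterates it for the illustrative special case of exponential interarrival times (so the transform is λ/(λ + s)); for that case the queue is M/M/k and the known answer is β = λ/(kμ).

```python
# Hedged sketch: solve beta = G*(k*mu*(1 - beta)) by fixed-point iteration,
# where G* is the Laplace transform of the interarrival distribution G.
# Exponential(lam) interarrivals are assumed here purely for illustration.

def solve_beta(lam, mu, k, iters=200):
    beta = 0.5
    for _ in range(iters):
        s = k * mu * (1.0 - beta)
        beta = lam / (lam + s)   # Laplace transform of the exponential at s
    return beta

beta = solve_beta(lam=1.0, mu=1.0, k=2)   # analytic fixed point: lam/(k*mu)
```

For a general G one would replace the transform line with a numeric evaluation of ∫ e^{−st} dG(t).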
URL: https://www.sciencedirect.com/science/article/pii/B9780128143469000135
The Exponential Distribution and the Poisson Process
Sheldon M. Ross, in Introduction to Probability Models (Tenth Edition), 2010
Example 5.25
(The Output Process of an Infinite Server Poisson Queue) It turns out that the output process of the M/G/∞ queue (that is, of the infinite server queue having Poisson arrivals and general service distribution G) is a nonhomogeneous Poisson process having intensity function λ(t) = λG(t). To verify this claim, let us first argue that the departure process has independent increments. Towards this end, consider nonoverlapping intervals O_1, …, O_k; now say that an arrival is type i, i = 1, …, k, if that arrival departs in the interval O_i.
By Proposition 5.3, it follows that the numbers of departures in these intervals are independent, thus establishing independent increments. Now, suppose that an arrival is "counted" if that arrival departs between t and t + h. Because an arrival at time s, s < t + h, will be counted with probability G(t − s + h) − G(t − s), it follows from Proposition 5.3 that the number of departures in (t, t + h) is a Poisson random variable with mean
λ ∫_0^{t+h} [G(t − s + h) − G(t − s)] ds
Therefore,
and
which completes the verification.
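A small numeric illustration of the claim: with exponential service times (an assumed choice of G, not one made in the text), the output intensity λ(t) = λG(t) = λ(1 − e^{−μt}) rises from 0 toward the input rate λ, and integrating it gives the mean number of departures in an interval.

```python
import math

# Illustrative M/G/inf output process with G exponential(mu):
# intensity lambda(t) = lam * (1 - e^{-mu t}).

lam, mu = 3.0, 1.0

def output_intensity(t):
    """Departure intensity lam * G(t) for exponential service times."""
    return lam * (1.0 - math.exp(-mu * t))

def mean_departures(t, h):
    """Mean number of departures in (t, t+h): integral of the intensity."""
    return lam * (h - (math.exp(-mu * t) - math.exp(-mu * (t + h))) / mu)
```

As t grows the intensity approaches λ, reflecting the fact that in steady state the output rate of the queue matches the input rate.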
If we let Sn denote the time of the n th event of the nonhomogeneous Poisson process, then we can obtain its density as follows:
which implies that
where
URL: https://www.sciencedirect.com/science/article/pii/B978012375686200008X
Markov Processes
Scott L. Miller, Donald Childers, in Probability and Random Processes (Second Edition), 2012
9.6 Engineering Application: A Telephone Exchange
Consider a base station in a cellular phone system. Suppose calls arrive at the base station according to a Poisson process with some arrival rate λ. These calls are initiated by mobile units within the cell served by that base station. Furthermore, suppose each call has a duration that is an exponential random variable with some mean, 1/μ. The base station has some fixed number of channels, m, that can be used to service the demands of the mobiles in its cell. If all m channels are being used, any new call that is initiated cannot be served and the call is said to be blocked. We are interested in calculating the probability that when a mobile initiates a call, the customer is blocked.
Since the arrival process is memoryless and the departure process is memoryless, the number of calls being serviced by the base station at time t, X(t), is a birth-death Markov process. Here, the arrival and departure rates (given there are n channels currently being used) are given by
λ_n = λ for 0 ≤ n < m, λ_n = 0 for n ≥ m;  μ_n = nμ   (9.69)
The steady-state distribution of this Markov process is given by Equation (9.44). For this example, the distribution is found to be
π_n = [(λ/μ)^n/n!] / [Σ_{k=0}^{m} (λ/μ)^k/k!],  n = 0, 1, …, m   (9.70)
The blocking probability is just the probability that when a call is initiated, it finds the system in state m. In steady state, this is given by π_m, and the resulting blocking probability is the so-called Erlang-B formula,
Pr(blocking) = π_m = [(λ/μ)^m/m!] / [Σ_{k=0}^{m} (λ/μ)^k/k!]   (9.71)
This equation is plotted in Figure 9.7 for several values of m. The horizontal axis is the ratio λ/μ, which is referred to in the telephony literature as the traffic intensity. As an example of the use of this equation, suppose a certain base station had 60 channels available to service incoming calls. Furthermore, suppose each user initiated calls at a rate of 1 call per 3 hours and calls had an average duration of 3 minutes (0.05 hours). If a 2% probability of blocking is desired, then from Figure 9.7 we determine that the system can handle a traffic intensity of approximately 50 Erlangs. Note that each user generates an intensity of
Figure 9.7. The Erlang-B formula.
(1/3 call per hour) × (0.05 hours per call) = 1/60 Erlang   (9.72)
Hence, a total of 50 × 60 = 3000 mobile users per cell could be supported while still maintaining a 2% blocking probability.
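In practice the Erlang-B formula is usually evaluated with the standard recursion B(0) = 1, B(k) = aB(k − 1)/(k + aB(k − 1)), which avoids the large factorials in Eq. (9.71); a quick sketch:

```python
def erlang_b(a, m):
    """Blocking probability for m channels offered a Erlangs of traffic."""
    b = 1.0
    for k in range(1, m + 1):
        b = a * b / (k + a * b)   # numerically stable Erlang-B recursion
    return b

# The text's design point: 60 channels at about 50 Erlangs of offered traffic
# gives roughly 2% blocking.
blocking = erlang_b(50.0, 60)
```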
URL: https://www.sciencedirect.com/science/article/pii/B9780123869814500126
Markov Processes
Scott L. Miller, Donald Childers, in Probability and Random Processes, 2004
9.4 Continuous Time Markov Processes
In this section, we investigate Markov processes where the time variable is continuous. In particular, most of our attention will be devoted to the so-called birth-death processes, which are a generalization of the Poisson counting process studied in the previous chapter. To start with, consider a random process X(t) whose state space is either finite or countably infinite, so that we can represent the states of the process by the set of integers, X(t) ∈ {…, −3, −2, −1, 0, 1, 2, 3, …}. Any process of this sort that is a Markov process has the interesting property that the time between any change of states is an exponential random variable. To see this, define T_i to be the time between the ith and the (i + 1)th change of state and let h_i(t) be the complement to its CDF, h_i(t) = Pr(T_i > t). Then, for t > 0, s > 0,
h_i(t + s) = Pr(T_i > t + s) = Pr(T_i > t + s | T_i > s)Pr(T_i > s) = Pr(T_i > t + s | T_i > s)h_i(s)   (9.31)
Due to the Markovian nature of the process, Pr(T_i > t + s | T_i > s) = Pr(T_i > t) = h_i(t), and hence the previous equation simplifies to
h_i(t + s) = h_i(t)h_i(s)   (9.32)
The only function that satisfies this type of relationship for arbitrary t and s is an exponential function of the form h_i(t) = e^{−ρ_i t} for some constant ρ_i. Furthermore, for this function to be a valid probability, the constant ρ_i must not be negative. From this, the PDF of the time between changes of state is easily found to be f_{T_i}(t) = ρ_i e^{−ρ_i t} u(t).
As with discrete time Markov chains, the continuous time Markov process can be described by its transition probabilities.
DEFINITION 9.11: Define p_{i,j}(t) = Pr(X(t_0 + t) = j | X(t_0) = i) to be the transition probability for a continuous time Markov process. If this probability does not depend on t_0, then the process is said to be a homogeneous Markov process.
Unless otherwise stated, we assume for the rest of this chapter that all continuous time Markov processes are homogeneous. The transition probabilities, p_{i,j}(t), are somewhat analogous to the n-step transition probabilities used in the study of discrete time processes, and as a result, these probabilities satisfy a continuous time version of the Chapman-Kolmogorov equations:
p_{i,j}(t + s) = Σ_k p_{i,k}(t)p_{k,j}(s)   (9.33)
One of the most commonly studied classes of continuous time Markov processes is the birth-death process. These processes get their name from applications in the study of biological systems, but they are also commonly used in the study of queueing theory and many other applications. The birth-death process is similar to the discrete time random walk studied in the previous section in that when the process changes states, it either increases by 1 or decreases by 1. As with the Poisson counting process, the general class of birth-death processes can be described by the transition probabilities over an infinitesimal period of time, Δt. For a birth-death process,
p_{i,j}(Δt) = λ_iΔt + o(Δt) for j = i + 1;  μ_iΔt + o(Δt) for j = i − 1;  1 − (λ_i + μ_i)Δt + o(Δt) for j = i;  o(Δt) otherwise   (9.34)
The parameter λi is called the birth rate, while μi is the death rate when the process is in state i. In the context of queueing theory, λi and μi are referred to as the arrival and departure rates, respectively.
Similar to what was done with the Poisson counting process, by letting s = Δt in Equation 9.33 and then applying the infinitesimal transition probabilities, a set of differential equations can be developed that will allow us to solve for the general transition probabilities. From Equation 9.33,
p_{i,j}(t + Δt) = Σ_k p_{i,k}(t)p_{k,j}(Δt) = λ_{j−1}Δt p_{i,j−1}(t) + [1 − (λ_j + μ_j)Δt]p_{i,j}(t) + μ_{j+1}Δt p_{i,j+1}(t) + o(Δt)   (9.35)
Rearranging terms and dividing by Δt produces
[p_{i,j}(t + Δt) − p_{i,j}(t)]/Δt = λ_{j−1}p_{i,j−1}(t) − (λ_j + μ_j)p_{i,j}(t) + μ_{j+1}p_{i,j+1}(t) + o(Δt)/Δt   (9.36)
Finally, passing to the limit as Δt → 0 results in
dp_{i,j}(t)/dt = λ_{j−1}p_{i,j−1}(t) − (λ_j + μ_j)p_{i,j}(t) + μ_{j+1}p_{i,j+1}(t)   (9.37)
This set of equations is referred to as the forward Kolmogorov equations. One can follow a similar procedure (see Exercise 9.24) to develop a slightly different set of equations known as the backward Kolmogorov equations:
dp_{i,j}(t)/dt = λ_i p_{i+1,j}(t) − (λ_i + μ_i)p_{i,j}(t) + μ_i p_{i−1,j}(t)   (9.38)
For all but the simplest examples, it is very difficult to find a closed form solution for this system of equations. However, the Kolmogorov equations can lend some insight into the behavior of the system. For example, consider the steady state distribution of the Markov process. If a steady state exists, we would expect that as t → ∞, p_{i,j}(t) → π_j independent of i and also that dp_{i,j}(t)/dt → 0. Plugging these simplifications into the forward Kolmogorov equations leads to
0 = λ_{j−1}π_{j−1} − (λ_j + μ_j)π_j + μ_{j+1}π_{j+1}   (9.39)
These equations are known as the global balance equations. From them, the steady state distribution can be found (if it exists). The solution to the balance equations is surprisingly easy to obtain. First, we rewrite the difference equation in the more symmetric form
μ_{j+1}π_{j+1} − λ_jπ_j = μ_jπ_j − λ_{j−1}π_{j−1}   (9.40)
Next, assume that the Markov process is defined on the states j = 0, 1, 2, …. Then the previous equation must be adjusted for the end point j = 0 (assuming μ_0 = 0, which merely states that there can be no deaths when the population's size is zero) according to
μ_1π_1 − λ_0π_0 = 0   (9.41)
Combining Equations 9.40 and 9.41 results in
μ_{j+1}π_{j+1} − λ_jπ_j = 0,  j = 0, 1, 2, …   (9.42)
which leads to the simple recursion
π_{j+1} = (λ_j/μ_{j+1})π_j   (9.43)
whose solution is given by
π_j = π_0 ∏_{i=1}^{j} (λ_{i−1}/μ_i)   (9.44)
This gives the π_j in terms of π_0. In order to determine π_0, the constraint that the π_j must form a distribution is imposed:
Σ_j π_j = 1  ⇒  π_0 = [1 + Σ_{j=1}^{∞} ∏_{i=1}^{j} (λ_{i−1}/μ_i)]^{−1}   (9.45)
This completes the proof of the following theorem.
THEOREM 9.4: For a Markov birth-death process with birth rate λ_n, n = 0, 1, 2, …, and death rate μ_n, n = 1, 2, 3, …, the steady state distribution is given by
π_k = [∏_{i=1}^{k} (λ_{i−1}/μ_i)] / [1 + Σ_{j=1}^{∞} ∏_{i=1}^{j} (λ_{i−1}/μ_i)]   (9.46)
If the series in the denominator diverges, then πk = 0 for any finite k. This indicates that a steady state distribution does not exist. Likewise, if the series converges, the πk will be nonzero, resulting in a well-behaved steady state distribution.
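Theorem 9.4 translates directly into a short routine for a birth-death chain truncated at a finite number of states; the truncation level and the rates used below are assumptions made purely for illustration.

```python
def bd_stationary(lam, mu, N):
    """Stationary distribution of a birth-death chain truncated to states 0..N.

    lam[j-1] is the birth rate of state j-1 and mu[j-1] the death rate of
    state j, so the unnormalized weights follow Eq. (9.44):
    w_j = prod_{i=1..j} lam_{i-1} / mu_i.
    """
    w = [1.0]
    for j in range(1, N + 1):
        w.append(w[-1] * lam[j - 1] / mu[j - 1])
    total = sum(w)
    return [x / total for x in w]
```

With constant rates λ_n = 0.5 and μ_n = 1 and a large truncation, the result is numerically indistinguishable from the M/M/1 geometric distribution (1 − λ/μ)(λ/μ)^j.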
EXAMPLE 9.12: (The M/M/1 Queue) In this example, we consider the birth-death process with constant birth rate and constant death rate. In particular, we take
λ_n = λ, n = 0, 1, 2, …,  μ_n = μ, n = 1, 2, 3, …
This model is commonly used in the study of queueing systems and, in that context, is referred to as the M/M/1 queue. In this nomenclature, the first "M" refers to the arrival process as being Markovian, the second "M" refers to the departure process as being Markovian, and the "1" is the number of servers. So this is a single server queue, where the interarrival time of new customers is an exponential random variable with mean 1/λ and the service time for each customer is exponential with mean 1/μ. For the M/M/1 queueing system, λ_{i−1}/μ_i = λ/μ for all i so that
π_0 = [Σ_{j=0}^{∞} (λ/μ)^j]^{−1} = 1 − λ/μ,  provided λ < μ
The resulting steady state distribution of the queue size is then
π_j = (1 − λ/μ)(λ/μ)^j,  j = 0, 1, 2, …, for λ < μ
Hence, if the arrival rate is less than the departure rate, the queue size will have a steady state. It makes sense that if the arrival rate is greater than the departure rate, then the queue size will tend to grow without bound.
EXAMPLE 9.13: (The M/M/∞ Queue) Next suppose the last example is modified so that there are an infinite number of servers available to simultaneously provide service to all customers in the system. In that case, no customers ever wait in line, and the process X(t) now counts the number of customers in the system (receiving service) at time t. As before, we take the arrival rate to be constant, λ_n = λ, but now the departure rate needs to be proportional to the number of customers in service, μ_n = nμ. In this case, λ_{i−1}/μ_i = λ/(iμ) and
π_0 = [Σ_{j=0}^{∞} (λ/μ)^j/j!]^{−1} = e^{−λ/μ}
Note that the series converges for any λ and μ, and hence the M/M/∞ queue will always have a steady state distribution given by
π_j = [(λ/μ)^j/j!]e^{−λ/μ},  j = 0, 1, 2, …
EXAMPLE 9.14: This example demonstrates one way to simulate the M/M/1 queueing system of Example 9.12. One realization of this process as produced by the code that follows is illustrated in Figure 9.4. In generating the figure, we use an average arrival rate of λ = 20 customers per hour and an average service time of 1/μ = 2 minutes. This leads to the condition λ < μ and the M/M/1 queue exhibits stable behavior. The reader is encouraged to run the program for the case when λ > μ to observe the unstable behavior (the queue size will tend to grow continuously over time).
Figure 9.4. Simulated realization of the birth-death process for M/M/1 queueing system of Example 9.12.
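The MATLAB listing the example refers to is not reproduced in this excerpt; the following Python sketch simulates the same M/M/1 birth-death path using the example's rates (λ = 20 arrivals per hour and mean service time 1/μ = 2 minutes, i.e., μ = 30 per hour).

```python
import random

# Hedged sketch of the simulation described in Example 9.14: draw exponential
# holding times, then choose birth (arrival) or death (departure) with
# probabilities proportional to the rates.

def simulate_mm1(lam, mu, t_end, seed=1):
    """Return the piecewise-constant sample path [(time, queue size), ...]."""
    random.seed(seed)
    t, x, path = 0.0, 0, [(0.0, 0)]
    while t < t_end:
        rate = lam + (mu if x > 0 else 0.0)   # no departures from state 0
        t += random.expovariate(rate)         # exponential holding time
        if random.random() < lam / rate:
            x += 1                            # arrival (birth)
        else:
            x -= 1                            # departure (death)
        path.append((t, x))
    return path

path = simulate_mm1(lam=20.0, mu=30.0, t_end=10.0)
```

Setting lam greater than mu instead reproduces the unstable behavior mentioned in the example, with the queue size drifting upward over time.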
If the birth-death process is truly modeling the size of a population of some organism, then it would be reasonable to consider the case when λ0 = 0. That is, when the population size reaches zero, no further births can occur. In that case, the species is extinct and the state X(t) = 0 is an absorbing state. A fundamental question would then be, Is extinction a certain event, and if not what is the probability of the process being absorbed into the state of extinction? Naturally the answer to this question would depend on the starting population size. Let qi be the probability that the process eventually enters the absorbing state, given that it is initially in state i. Note that if the process is currently in state i, after the next transition, the birth-death process must be either in state i – 1 or state i + 1. The time to the next birth, Bi, is a random variable with an exponential distribution with a mean of 1/λ i , while the time to the next death is an exponential random variable, Di, with a mean of 1/μ i . Hence, the process will transition to state i + 1 if Bi < Di, otherwise it will transition to state i – 1. The reader can easily verify that Pr(Bi < Di ) = λ i /(λ i + μ i ). The absorption probability can then be written as
$$q_i = \frac{\lambda_i}{\lambda_i + \mu_i}\,q_{i+1} + \frac{\mu_i}{\lambda_i + \mu_i}\,q_{i-1}, \qquad i \ge 1. \tag{9.47}$$
This provides a recursive set of equations that can be solved to find the absorption probabilities. To solve this set of equations, we rewrite them as
$$q_{i+1} - q_i = \frac{\mu_i}{\lambda_i}\,(q_i - q_{i-1}). \tag{9.48}$$
After applying this recursion repeatedly and using the fact that q0 = 1,
$$q_{i+1} - q_i = (q_1 - 1)\prod_{j=1}^{i}\frac{\mu_j}{\lambda_j}. \tag{9.49}$$
Summing this equation over i = 1, 2, …, n results in
$$q_{n+1} - q_1 = (q_1 - 1)\sum_{i=1}^{n}\prod_{j=1}^{i}\frac{\mu_j}{\lambda_j}. \tag{9.50}$$
Next, suppose that the series on the right-hand side of the previous equation diverges as n → ∞. Since the qi are probabilities, the left-hand side of the equation must remain bounded, which is possible only if q1 = 1. Equation 9.49 then implies that qn = 1 for all n. That is, if
$$\sum_{i=1}^{\infty}\prod_{j=1}^{i}\frac{\mu_j}{\lambda_j} = \infty, \tag{9.51}$$
then absorption will eventually occur with probability 1 regardless of the starting state. If q1 < 1 (absorption is not certain), then the preceding series must converge to a finite limit, and in that case, as n → ∞, qn → 0. Passing to the limit n → ∞ in Equation 9.50 then allows a solution for q1 of the form
$$q_1 = \frac{\displaystyle\sum_{i=1}^{\infty}\prod_{j=1}^{i}\mu_j/\lambda_j}{1 + \displaystyle\sum_{i=1}^{\infty}\prod_{j=1}^{i}\mu_j/\lambda_j}. \tag{9.52}$$
Furthermore, the general solution for the absorption probability is
$$q_n = \frac{\displaystyle\sum_{i=n}^{\infty}\prod_{j=1}^{i}\mu_j/\lambda_j}{1 + \displaystyle\sum_{i=1}^{\infty}\prod_{j=1}^{i}\mu_j/\lambda_j}. \tag{9.53}$$
EXAMPLE 9.15: Consider a population model where both the birth and death rates are proportional to the population: λn = nλ, μn = nμ. For this model,
$$\prod_{j=1}^{i}\frac{\mu_j}{\lambda_j} = \prod_{j=1}^{i}\frac{j\mu}{j\lambda} = \left(\frac{\mu}{\lambda}\right)^{i}.$$
Hence, if λ < μ, the series in Equation 9.51 diverges and the species will eventually reach extinction with probability 1. If λ > μ,
$$\sum_{i=1}^{\infty}\left(\frac{\mu}{\lambda}\right)^{i} = \frac{\mu/\lambda}{1 - \mu/\lambda} = \frac{\mu}{\lambda - \mu},$$
and the absorption (extinction) probabilities are
$$q_n = \left(\frac{\mu}{\lambda}\right)^{n}, \qquad n = 1, 2, \ldots$$
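The closed-form result can be checked numerically by truncating the sums in Equation 9.53; for the linear model each product collapses to (μ/λ)^i. A Python sketch (the rates and truncation level are illustrative):

```python
def absorption_prob(n, lam, mu, terms=2000):
    """Approximate q_n from Equation 9.53 for the linear birth-death
    model lam_j = j*lam, mu_j = j*mu, where each product of ratios
    collapses to (mu/lam)**i; the infinite sums are truncated."""
    rho = mu / lam
    tail = sum(rho**i for i in range(n, terms))      # numerator sum, i >= n
    total = sum(rho**i for i in range(1, terms))     # denominator sum, i >= 1
    return tail / (1.0 + total)

lam, mu = 2.0, 1.0   # births twice as likely as deaths
for n in (1, 2, 5):
    # compare the truncated-series value with the closed form (mu/lam)**n
    print(n, absorption_prob(n, lam, mu), (mu / lam)**n)
```

For λ = 2 and μ = 1 the two columns agree: extinction from state n occurs with probability (1/2)^n.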
Continuous-time Markov processes need not have a discrete amplitude as in the previous examples. In the following, we discuss a class of continuous-time, continuous-amplitude Markov processes. To start with, note that for any time instants t0 < t1 < t2, the conditional PDF of a Markov process must satisfy the Chapman-Kolmogorov equation
$$f(x_2, t_2\,|\,x_0, t_0) = \int_{-\infty}^{\infty} f(x_2, t_2\,|\,x_1, t_1)\,f(x_1, t_1\,|\,x_0, t_0)\,dx_1. \tag{9.54}$$
This is just the continuous-amplitude version of Equation 9.33. Here we use the notation f(x2, t2|x1, t1) to represent the conditional probability density of the process X(t2) at the point x2, conditioned on X(t1) = x1. Next, suppose we interpret these time instants as t0 = 0, t1 = t, and t2 = t + Δt. In this case, we interpret x2 − x1 = Δx as the infinitesimal change in the process that occurs during the infinitesimal time interval Δt, and f(x2, t2|x1, t1) is the PDF of that increment.
Define φ(ω|x1, t) to be the characteristic function of Δx = x2 − x1:
$$\phi(\omega\,|\,x_1, t) = E\!\left[e^{j\omega\,\Delta x}\,\middle|\,x_1, t\right] = \int_{-\infty}^{\infty} e^{j\omega(x_2 - x_1)}\,f(x_2, t + \Delta t\,|\,x_1, t)\,dx_2. \tag{9.55}$$
We note that the characteristic function can be expressed in a Taylor series as
$$\phi(\omega\,|\,x_1, t) = \sum_{k=0}^{\infty}\frac{(j\omega)^k}{k!}\,M_k(x_1, t), \tag{9.56}$$
where Mk(x1, t) = E[(x2 − x1)^k | x1, t] is the kth moment of the increment Δx. Taking inverse transforms of this expression, the conditional PDF can be expressed as
$$f(x_2, t + \Delta t\,|\,x_1, t) = \sum_{k=0}^{\infty}\frac{M_k(x_1, t)}{k!}\left(-\frac{\partial}{\partial x_2}\right)^{\!k}\delta(x_2 - x_1). \tag{9.57}$$
Inserting this result into the Chapman-Kolmogorov equation, Equation 9.54, results in
$$f(x_2, t + \Delta t\,|\,x_0, t_0) = \sum_{k=0}^{\infty}\frac{(-1)^k}{k!}\frac{\partial^k}{\partial x_2^{\,k}}\!\left[M_k(x_2, t)\,f(x_2, t\,|\,x_0, t_0)\right]. \tag{9.58}$$
Subtracting f(x2, t|x0, t0) from both sides of this equation (the k = 0 term, since M0 = 1) and dividing by Δt results in
$$\frac{f(x_2, t + \Delta t\,|\,x_0, t_0) - f(x_2, t\,|\,x_0, t_0)}{\Delta t} = \sum_{k=1}^{\infty}\frac{(-1)^k}{k!}\frac{\partial^k}{\partial x_2^{\,k}}\!\left[\frac{M_k(x_2, t)}{\Delta t}\,f(x_2, t\,|\,x_0, t_0)\right]. \tag{9.59}$$
Finally, passing to the limit as Δt → 0 results in the partial differential equation
$$\frac{\partial}{\partial t}f(x, t\,|\,x_0, t_0) = \sum_{k=1}^{\infty}\frac{(-1)^k}{k!}\frac{\partial^k}{\partial x^k}\!\left[K_k(x, t)\,f(x, t\,|\,x_0, t_0)\right], \tag{9.60}$$
where the function Kk (x, t) is defined as
$$K_k(x, t) = \lim_{\Delta t \to 0}\frac{M_k(x, t)}{\Delta t}. \tag{9.61}$$
For many processes of interest, the PDF of an infinitesimal increment can be accurately approximated from its first few moments, and hence we take Kk (x, t) = 0 for k > 2. For such processes, the PDF must satisfy
$$\frac{\partial f}{\partial t} = -\frac{\partial}{\partial x}\!\left[K_1(x, t)\,f\right] + \frac{1}{2}\frac{\partial^2}{\partial x^2}\!\left[K_2(x, t)\,f\right]. \tag{9.62}$$
This is known as the (one-dimensional) Fokker-Planck equation and is used extensively in diffusion theory to model the dispersion of fumes, smoke, and similar phenomena.
In general, the Fokker-Planck equation is notoriously difficult to solve, and doing so is well beyond the scope of this text. Instead, we consider a simple special case in which the functions K1(x, t) and K2(x, t) are constants, say K1(x, t) = c and K2(x, t) = 2D. The Fokker-Planck equation then reduces to
$$\frac{\partial f}{\partial t} = -c\,\frac{\partial f}{\partial x} + D\,\frac{\partial^2 f}{\partial x^2}, \tag{9.63}$$
where in diffusion theory, D is known as the coefficient of diffusion and c is the drift. This equation is used in models that involve the diffusion of smoke or other pollutants in the atmosphere, the diffusion of electrons in a conductive medium, the diffusion of liquid pollutants in water and soil, and the diffusion of plasmas. This equation can be solved in several ways. Perhaps one of the easiest methods is to use Fourier transforms. This is explored further in the exercises where the reader is asked to show that (taking x0 = 0 and t0 = 0) the solution to this diffusion equation is
$$f(x, t\,|\,0, 0) = \frac{1}{\sqrt{4\pi D t}}\,\exp\!\left(-\frac{(x - ct)^2}{4Dt}\right). \tag{9.64}$$
That is, the PDF is Gaussian with mean ct and variance 2Dt, both of which change with time. The behavior of this process is explored in the next example.
EXAMPLE 9.16: In this example, we model the diffusion of smoke from a forest fire that starts in a National Park at time t = 0 and location x = 0. The smoke drifts in the positive x direction due to a wind blowing at c = 10 miles per hour, and the diffusion coefficient is D = 1 square mile per hour. The probability density function is given in Equation 9.64. Figure 9.5 provides a three-dimensional rendition of this function, produced with a short MATLAB program.
Figure 9.5. Observations of the PDF at different time instants showing the drift and dispersion of smoke for Example 9.16.
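Equation 9.64 is easy to evaluate directly; the following Python sketch stands in for the MATLAB program, with the surface plot replaced by a tabulation of the PDF at a few time instants:

```python
import math

def diffusion_pdf(x, t, c=10.0, D=1.0):
    """Gaussian solution of df/dt = -c df/dx + D d2f/dx2 with initial
    condition f(x, 0) = delta(x): mean c*t, variance 2*D*t."""
    return math.exp(-(x - c * t)**2 / (4.0 * D * t)) / math.sqrt(4.0 * math.pi * D * t)

# the smoke plume drifts downwind (mean 10t) and spreads (variance 2t)
for t in (0.5, 1.0, 2.0):
    peak_x = 10.0 * t                       # location of the peak (the mean)
    print(t, peak_x, diffusion_pdf(peak_x, t))
```

Evaluating `diffusion_pdf` over a grid of x and t values and passing the result to a surface-plotting routine reproduces a rendition like Figure 9.5.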
URL: https://www.sciencedirect.com/science/article/pii/B9780121726515500099
Network of Queues
J. MEDHI , in Stochastic Models in Queueing Theory (Second Edition), 2003
Note 1:
An alternative proof of Jackson's theorem (using the concept of partial balance) has been given by Lemoine (1977).
Note 2:
The equilibrium-queue-length distribution in a Jackson network is of the product form.
Note 3:
Jackson's theorem contains the remarkable result that, whenever the equilibrium condition holds, each node in the network behaves as if it were an independent M/M/ci queue with Poisson input. It is implied that the RVs ni representing the numbers of customers at the individual nodes i (i = 1, 2, …, k) in steady state are independent random variables.
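For single-server nodes (ci = 1) the product form is explicit: with ρi = αi/μi < 1, the joint distribution is p(n1, …, nk) = ∏i (1 − ρi)ρi^ni, i.e., independent geometric marginals. A Python sketch (the utilizations are invented for illustration):

```python
def product_form_prob(ns, rhos):
    """Joint equilibrium probability of the queue lengths in a Jackson
    network of single-server nodes: each node i contributes an
    independent geometric factor (1 - rho_i) * rho_i ** n_i."""
    p = 1.0
    for n, rho in zip(ns, rhos):
        assert 0 <= rho < 1, "equilibrium requires rho_i < 1"
        p *= (1.0 - rho) * rho**n
    return p

rhos = [0.5, 0.25]                 # utilizations alpha_i / mu_i (illustrative)
print(product_form_prob([0, 0], rhos))   # probability both nodes are empty

# marginals are geometric: summing out node 2 recovers node 1's marginal
marg = sum(product_form_prob([2, n2], rhos) for n2 in range(200))
print(marg, (1.0 - rhos[0]) * rhos[0]**2)
```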
Note 4:
In equilibrium, the external departure streams (outputs) from the individual nodes are independent Poisson processes, with rate αiqi for node i, i = 1, 2, …, k. The relation (5.3.14) shows that Σi λi = Σi αiqi; that is, under equilibrium the total external input flow rate equals the total external output flow rate. However, the arrival and departure processes at an M/M/1 node are not always equivalent (Walrand, 1982a).
Note 5:
In general, the total input process into an individual node is not necessarily Poisson, even under equilibrium conditions. For a demonstration of this, see Gelenbe and Mitrani (1980, pp. 85–86). Even so, these non-Poisson total arrivals see time averages, and this accounts for the simple state-probability results.
Note 6:
The traffic equations
$$\alpha_i = \lambda_i + \sum_{j=1}^{k}\alpha_j\,p_{ji}, \qquad i = 1, 2, \ldots, k,$$
have a unique solution. Writing α = (α1, …, αk), λ = (λ1, …, λk), and P = (pij), we get
$$\boldsymbol{\alpha} = \boldsymbol{\lambda} + \boldsymbol{\alpha}P. \tag{5.3.16}$$
Now, since in an open network any customer in any queue eventually leaves the network, each element of P^n converges to 0 as n → ∞. Thus the series Σn≥0 P^n converges, and it converges to (I − P)^{-1}; that is, (I − P) has an inverse, so the rank of (I − P) is k. This demonstrates that α = λ + αP has, for given λ and P, the unique solution α = λ(I − P)^{-1}. P is called the transfer, switching, or routing matrix. The matrix P of an open network is not stochastic, since at least one of its row sums is less than one.
If the outside world (the source from which inputs come and the sink to which external outputs go) is considered as node "O," then the square matrix with one more column and row will be stochastic.
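The argument above can be checked numerically: iterating α ← λ + αP generates the partial sums λ(I + P + ⋯ + Pⁿ), which converge precisely because Pⁿ → 0 in an open network. A Python sketch with an illustrative three-node network (the rates and routing probabilities are invented for the example):

```python
def solve_traffic(lam, P, tol=1e-12, max_iter=10_000):
    """Solve the traffic equations alpha = lam + alpha P by fixed-point
    iteration; the iterates are the partial sums lam (I + P + ... + P^n),
    which converge because each element of P^n -> 0 in an open network."""
    k = len(lam)
    alpha = list(lam)
    for _ in range(max_iter):
        new = [lam[j] + sum(alpha[i] * P[i][j] for i in range(k))
               for j in range(k)]
        if max(abs(a - b) for a, b in zip(alpha, new)) < tol:
            return new
        alpha = new
    raise RuntimeError("traffic equations did not converge")

lam = [1.0, 0.5, 0.0]                 # external arrival rates lambda_i
P = [[0.0, 0.6, 0.2],                 # routing matrix (row sums < 1;
     [0.1, 0.0, 0.5],                 #  the deficit q_i is the probability
     [0.3, 0.0, 0.0]]                 #  of leaving the network from node i)
alpha = solve_traffic(lam, P)
q = [1.0 - sum(row) for row in P]     # exit probabilities q_i
print(alpha)
print(sum(lam), sum(a * qi for a, qi in zip(alpha, q)))   # equal total flows
```

The last line verifies the flow-conservation relation Σ λi = Σ αiqi of Note 4.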
Note 7:
Consider a state N = (n1, …, ni, …, nk) of the network in equilibrium. For each node i, the rate of flow out of state N due to a departure of a customer from node i equals the rate of flow into state N due to the arrival of a customer at node i, whether by external input or by internal transfer. This gives the local-balance relations; Eq. (5.3.9), which involves the rates of flow of the complete network, constitutes the global-balance equation.
The local-balance equation can be obtained by equating the rate of flow out of state (n1, …, ni, …, nk) due to a customer leaving node i with the rate of flow into that state due to the arrival of a customer at node i, either from outside or from one of the other internal nodes. With μi(ni) denoting the service-completion rate of node i when it holds ni customers (μi(ni) = min(ni, ci)μi), we have, for ni ≥ 1,
$$\mu_i(n_i)\,p(n_1, \ldots, n_i, \ldots, n_k) = \lambda_i\,p(n_1, \ldots, n_i - 1, \ldots, n_k) + \sum_{j=1}^{k} p_{ji}\,\mu_j(n_j + 1)\,p(n_1, \ldots, n_i - 1, \ldots, n_j + 1, \ldots, n_k).$$
Assuming that p is of the form (5.3.3), and using (5.3.10)–(5.3.12), we get the following form of the local-balance equation:
$$\mu_i(n_i)\,p(n_1, \ldots, n_i, \ldots, n_k) = \alpha_i\,p(n_1, \ldots, n_i - 1, \ldots, n_k).$$
Note 8:
In notes (3) and (5) we observe that, although the equilibrium distribution has the product form as if the service facilities were independent and all arrival processes were Poisson, the arrival process within the network is, in general, not Poisson. Further, the facilities and the associated queue-length processes (time-dependent) are not independent (Melamed, 1979).
This shows that the equilibrium distribution does not capture the transient or time-dependent behavior. Similar behavior is also noted in the case of a G/M/1 queue. This points to the limitations of steady-state results. (See also Section 2.5.)
Note 9:
Goodman and Massey (1984) have generalized Jackson's theorem to the nonergodic case. Their results completely characterize the large-time behavior (t → ∞) of Jackson networks.
URL: https://www.sciencedirect.com/science/article/pii/B9780124874626500059
Stochastic optimization approaches for elective surgery scheduling with downstream capacity constraints: Models, challenges, and opportunities
Karmel S. Shehadeh , Rema Padman , in Computers & Operations Research, 2022
4.2.3 Opportunities
Clearly, stochastic Seq_Sched with PACU is not well studied in the literature, and there are many open questions and challenges to tackle. First, we need data-driven models to characterize and analyze the distribution of OR blocking. OR blocking may be impossible to eliminate entirely, especially when PACU capacity is tight. By analyzing the distribution of blocking, we could identify an acceptable and realistic upper bound on blocking time. This will limit the scope of the optimization problem from eliminating blocking to controlling it under a realistic upper bound.
Second, there is a need to design optimal and implementable admission policies for the PACU. First-Come-First-Served (FCFS) and Critical-Patient-First (CPF) are two well-known and easy-to-implement policies. However, no research has rigorously investigated the (sub-)optimality of these policies. Thus, we need data-driven, distribution-free models that can jointly optimize surgery scheduling decisions and the PACU admission policy while providing a good trade-off between tractability and implementability. In theory, optimizing both the scheduling decisions and the PACU admission policy requires formulating the problem as a multi-stage DRO model (MDROM) with risk-averse or risk-neutral objectives and/or constraints (or a multi-stage SP model if the distribution of uncertainty is known). The first stage of the MDROM pertains to deciding the number of patients to schedule, their assignments to surgical blocks, and the sequencing of their start times. In each stage after the first (e.g., completion of one or multiple surgeries simultaneously, departure of a patient from the PACU, etc.), the approach chooses the worst-case probability distribution from the ambiguity set under which the uncertain durations and LOS in the PACU are observed, and then optimizes a policy that grants admission to the PACU, blocks a patient in the OR until a bed is available, or cancels one of the remaining surgeries, based on the information available up to (and including) the current stage. This process continues until the last stage is reached.
Given the difficulty of solving the existing two-stage SP models and the possible ambiguity in the number of MDROM stages, such MDROM approaches may be intractable. Next, we suggest some additional techniques for formulating the MDROM and dealing with some anticipated challenges:
- Composite variable modeling (CVM). The Bai et al. (2017) and Bai et al. (2020) models are computationally challenging to solve, in part due to a large number of binary variables and complicated constraints. A composite variable is a binary variable that encompasses multiple elemental decisions (Cohn, 2002). CVM is a well-known method for reducing the size of, eliminating complicating constraints from, and strengthening the LP relaxation of combinatorial optimization problems. For example, instead of using two sets of binary variables to represent the precedence and sequencing decisions, or a TSP tour to represent the surgery sequence as in the Bai et al. (2020) formulation, we can use one set of binary position-assignment variables, which implicitly determine the precedence relationship between surgeries in each block. Keha et al. (2009) and Shehadeh et al. (2019) demonstrate that position-assignment-based reformulations of precedence-based formulations for single-machine (i.e., one provider or OR) stochastic sequencing problems without downstream PACU constraints improve the LP relaxations and solvability of these problems.
- Variable transformation. As detailed in Section 4.1.3, Shehadeh and Padman (2021) used a variable transformation to convert LOS in the SICU (and thus start and completion times in the SICU) into an arrival–departure process to and from the SICU, which helped them derive a tractable DR-MILP model. Recall that LOS in the SICU is in the range of days, while LOS in the PACU is in the range of hours. Thus, one can leverage similar variable-transformation ideas as in Shehadeh and Padman (2021) (which build on the work of Neyshabouri and Berg (2017)) to transform LOS in the PACU into an arrival–departure process to and from the PACU, and then formulate a DRO or SP model for this arrival–departure process. This may eliminate the need for the challenging-to-solve, large-scale, time-indexed formulation that computes patients' actual start and completion times in the OR and PACU (the arrival and departure times can be used to compute the time spent in these units).
- Two-stage approximation. The mathematically optimal PACU admission policy obtained by solving the multi-stage model may not be implementable in practice. To design and test feasible and implementable distributionally robust PACU admission policies, one can start by evaluating the conditions under which well-known, feasible, and easy-to-implement policies such as First-Come-First-Served (FCFS) and Critical-Patient-First (CPF) are (sub-)optimal. One approach is to implement each feasible policy in a two-stage DRO model, enforcing non-anticipativity and yielding upper bounds on the optimal value of the multi-stage DRO (minimization) model. By relaxing the non-anticipativity constraints of the multi-stage model, which may make it easier to solve, we can obtain a lower bound. By evaluating the gap between these bounds, we can obtain a better sense of which policy is near-optimal, or provide a tighter upper bound on the mathematically optimal distributionally robust PACU admission policy. Recently, Shehadeh et al. (2020b) presented a similar two-stage approximation idea to derive the first near-optimal, easy-to-implement adaptive policy for the multi-stage resequencing and rescheduling problem of unpunctual arrivals in outpatient clinics.
Finally, from a theoretical perspective, it is also worth investigating and comparing risk-averse and risk-neutral DRO (and other stochastic optimization) models for Seq_Sched with PACU capacity that incorporate multi-modal ambiguity sets of surgery durations and other random parameters that might be observed in outpatient procedure centers, such as no-shows and arrival times (Ahmadi-Javid et al., 2017). More broadly, Seq_Sched is an embedded sub-problem in an integrated model for elective surgery selection, assignment, and scheduling with PACU constraints. Thus, by designing efficient SO techniques to solve Seq_Sched, we can find efficient methods for solving an integrated model for elective surgery scheduling with downstream capacity constraints.
URL: https://www.sciencedirect.com/science/article/pii/S0305054821002628
Applications of stochastic modeling in air traffic management: Methods, challenges and opportunities for solving air traffic problems under uncertainty
Rob Shone , ... Konstantinos G. Zografos , in European Journal of Operational Research, 2021
3.3 Airport surface operations and departure control
Most of the queueing models discussed in this paper so far have been related to runway operations (take-offs and landings), without explicit consideration of the fine-grain operations involved in maneuvering aircraft so that they are ready to join a runway queue. However, bottlenecks can also occur away from the runways; for example, departing aircraft might experience delays caused by congestion on the airport taxiways. This section examines how stochastic modeling has been used with respect to airport ground operations, with a particular focus on aircraft departure processes.
As noted in Section 2, queueing models have been used extensively to represent arrivals and departures at airports. Although empirical studies support the assumption of Poisson demand processes for arrivals (Dunlay & Horonjeff, 1976; Willemain et al., 2004), we are not aware of any similar attempt to validate the Poisson model for airport departures. The main factors that affect aircraft departure times have been incorporated within simulation studies (Clarke, Melconian, Bly, & Rabbani, 2007; Shumsky, 1995) and statistical models for prediction (Carr, 2004; Idris, Clarke, Bhuva, & Kang, 2002). These factors include gate departure delays (which can be caused by passenger delays, crew scheduling issues, mechanical failures, etc.), interaction effects between different runways (which are particularly relevant if runways intersect each other, or if they are parallel but in close vicinity of each other), and adverse weather conditions. Recently, Badrinath, Balakrishnan, Joback, and Reynolds (2020) have provided motivation for stochastic modeling approaches by demonstrating the significant impact of demand-related uncertainty on airport surface operations.
Pujet, Delcaire, and Feron (1999) (see also Andersson, Carr, Feron, & Hall, 2000) used a data-driven queueing model for airport departures, with stochasticity introduced via the use of Gaussian distributions to model push-back durations, taxiing speeds and other factors. Simaiakis and Balakrishnan (2009) also used data-calibrated Gaussian random variables to model 'unimpeded' taxi-out times of aircraft, with actual taxi-out times obtained by including the effects of congestion. Subsequently, Simaiakis and Balakrishnan (2016) extended this work by developing and testing a model that predicts runway schedules and take-off times in response to a given aircraft push-back schedule. Notably, their approach makes use of a queueing model. Aircraft travel times between departure gates and runways are estimated via a separate procedure in order to generate an expected runway schedule, which then provides the deterministic, time-varying demand rates for the stochastic queueing model. The use of Erlang-distributed service times in their model is supported by an earlier empirical study (Simaiakis & Balakrishnan, 2013) which demonstrates the advantages of such an approach. Other researchers have used alternative methods for predicting aircraft taxi-out and departure delays under uncertainty; Balakrishna, Ganesan, and Sherry (2010) employed reinforcement learning (RL) methods, while Ravizza, Atkin, Maathuis, and Burke (2013) have used regression-based analyses.
The research described above is related to the prediction of aircraft departure delays via stochastic queueing or data-driven methods. Naturally, these kinds of modeling approaches can also be used to formulate decision problems. Burgain, Pinon, Feron, Clarke, and Mavris (2009) used an MDP approach to optimize the control of the push-back and taxiing processes under different levels of information regarding aircraft positions. Simaiakis, Sandberg, and Balakrishnan (2014) also considered a dynamic control problem in which decisions are made regarding time-dependent push-back rates. These push-back rates then act as inputs to a runway queueing model. They considered system states consisting of the number of aircraft taxiing to the runway at the start of each discrete time epoch and the length of the runway queue (measured in terms of Erlang service phases). Since all of the taxiing aircraft are assumed to have reached the runway by the start of the next epoch, the Bellman equations can be written in a simplified form involving the number of aircraft that push back during each period, a single-step cost function, the optimal cost-to-go function, a discount factor, and the finite queue capacity. Optimal push-back policies are then obtained using DP policy iteration methods.
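The structure of such a dynamic program can be illustrated with a toy value-iteration sketch in Python. The state space, dynamics, and cost function below are invented for illustration and are not those of Simaiakis, Sandberg, and Balakrishnan (2014); they only mimic the two-component state (taxiing aircraft, runway queue) and the assumption that all taxiing aircraft reach the runway by the next epoch:

```python
import itertools

def value_iteration(N_max=5, Q_max=5, U_max=3, beta=0.9,
                    p_serve=0.6, iters=500):
    """Toy push-back control MDP: state (n, q) is the number of taxiing
    aircraft and the runway-queue length; all n taxiing aircraft join the
    queue at the next epoch; the runway completes one take-off with
    probability p_serve; the action u is the number of push-backs; the
    single-step cost holds the queue and penalizes withheld push-backs."""
    states = list(itertools.product(range(N_max + 1), range(Q_max + 1)))
    V = {s: 0.0 for s in states}

    def cost(n, q, u):
        return q + 0.5 * (U_max - u)      # illustrative trade-off only

    for _ in range(iters):
        V_new = {}
        for n, q in states:
            best = float("inf")
            for u in range(U_max + 1):
                q_arr = min(q + n, Q_max)     # taxiing aircraft join the queue
                exp_next = (p_serve * V[(u, max(q_arr - 1, 0))]
                            + (1 - p_serve) * V[(u, q_arr)])
                best = min(best, cost(n, q, u) + beta * exp_next)
            V_new[(n, q)] = best
        V = V_new
    return V

V = value_iteration()
print(V[(0, 0)], V[(3, 4)])   # light traffic is cheaper than heavy traffic
```

The paper itself works with a richer model and policy iteration; the sketch only shows how a Bellman recursion over a (taxiing, queue) state couples the push-back decision to future runway congestion.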
Badrinath and Balakrishnan (2017) studied optimal control policies in a system of two queues in tandem, with the first queue representing congestion in an airport ramp or apron area and the second representing runway congestion. Although the queueing dynamics of their model are governed by simple differential equations (with optimal push-back policies obtained by solving a deterministic nonlinear program), simulation experiments are used to test the performance of the resulting push-back policies in a stochastic environment. McFarlane and Balakrishnan (2016) considered a similar dual-queue model and also investigated the effects of using different time discretizations for decision-making purposes. Lian, Zhang, Xing, Luo, and Cheng (2019) have demonstrated the benefits of dynamic push-back control policies using data obtained from Beijing International Airport. Their study includes the use of an iterative algorithm to optimize the choice of threshold in a queueing model for an airport taxiway. Chen and Solak (2020) considered a problem in which departing aircraft can be held either at a designated metering area or at the gates, and sought to optimize traffic flows at different surface locations under operational uncertainty.
Another type of surface management problem that one might consider is a gate assignment problem, in which flights must be assigned to departure gates under various 'strict' constraints (e.g. the need to avoid two flights being assigned to the same gate concurrently) and 'softer' constraints (e.g. the assignment of gates in such a way that flights operated by the same airline are located in the same physical area of the airport). Typical objectives might include minimizing the number of un-gated flights (i.e. flights assigned to the apron area), minimizing the towing operations required, or minimizing total passenger walking distance. A useful survey of such problems is provided by Bouras, Ghaleb, Suryahatmaja, and Salem (2014).
Although gate assignments are commonly affected by unforeseen disruptions, it appears that only limited attention has been given to stochastic versions of this problem. Some authors have used robust optimization, with a certain amount of 'buffer time' included in gate departure schedules in order to absorb stochastic delays (Hassounah & Steuart, 1993; Mangoubi & Mathaisel, 1985; Yan & Chang, 1998). Alternative robust optimization methods are proposed by Lim, Rodrigues, and Zhu (2005), Dorndorf, Jaehn, Lin, Ma, and Pesch (2007) and Yan and Tang (2007). In a similar vein, Narciso and Piera (2015) (see also Yan, Shieh, & Chen, 2002) have used simulation experiments to evaluate the robustness of different gate assignment policies. Seker and Noyan (2012) developed a stochastic optimization approach, with uncertainty related to flight arrival and departure times (treated as model inputs). Aoun and El Afia (2014) proposed an MDP formulation of a stochastic gate assignment problem, in which transition probabilities are based on potential conflicts (caused by operational delays) between flights assigned to the same gate.
Optimal control of airport surface operations is a problem that, if desired, can be formulated at a very microscopic level - with consideration of the availability of ground vehicles, apron stands, etc. Like other problems discussed in this paper, it is also a problem which (ideally) should not be treated in isolation, as there are obvious implications for aircraft departure times and other relevant performance indicators. In Section 4 we discuss various topics that carry implications for airport surface operations, including ground delay programs and the sequencing and scheduling of runway operations.
URL: https://www.sciencedirect.com/science/article/pii/S0377221720309164
Source: https://www.sciencedirect.com/topics/mathematics/departure-process