WO2008000127A1 - An apparatus and method for dispatching message - Google Patents

An apparatus and method for dispatching message

Info

Publication number
WO2008000127A1
Authority
WO
WIPO (PCT)
Prior art keywords
scheduler
scheduling
aggregation
queues
queue
Prior art date
Application number
PCT/CN2007/001171
Other languages
French (fr)
Chinese (zh)
Inventor
Chuntao Wang
Chao Hou
Jianbing Wang
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2008000127A1 publication Critical patent/WO2008000127A1/en

Links

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04L: Transmission of digital information, e.g. telegraphic communication
    • H04L 49/00: Packet switching elements
    • H04L 49/35: Switches specially adapted for specific applications

Definitions

  • the prior art defines a multi-layer "many-to-one" tree queue scheduling architecture so that digital subscriber line (DSL) broadband access technology can carry services with quality of service (QoS) requirements. As shown in FIG. 1, the tree queue scheduling architecture truly reflects the forwarding model of a typical broadband service and can therefore meet the QoS requirements of different services.
  • DSL digital subscriber line
  • the device schedules a physical port scheduler (scheduler) 101 according to the rate of the physical port.
  • once the physical port scheduler is scheduled, the next-level scheduler it owns, the VP scheduler 102, is scheduled according to the configured scheduling algorithm, for example Weighted Fair Queuing (WFQ), in which each virtual path (VP) scheduler is configured with a different weight;
  • WFQ Weighted Fair Queuing
  • VP virtual path scheduler
  • when one of the VP schedulers 102 is scheduled, the next-level scheduler it owns, the virtual circuit (VC) group scheduler 103, is scheduled according to the configured scheduling algorithm.
  • the VC scheduler 104 is scheduled according to the configured scheduling algorithm
  • the Session scheduler 105 is scheduled according to the configured scheduling algorithm
  • the prior art may use aggregation technology to bundle one or more physical interfaces or channels to form a logical interface or channel.
  • This logical interface or channel can provide higher bandwidth for the link, and the available bandwidth of the link is the sum of the bandwidth of all physical interfaces or channels.
  • the main aggregation technologies are Ethernet Trunk, Multilink PPP (point-to-point link aggregation), Multilink Frame Relay, and so on. Where aggregation technology is used, different packets of the same VP are transmitted through different physical interfaces.
  • An embodiment of the present invention provides an apparatus and a method for scheduling a message, so that, where aggregation technology is used, the packets to be sent can still be scheduled hierarchically to ensure the QoS of the service using the aggregation technology.
  • An apparatus for scheduling a message includes Ln schedulers and Ln-1 schedulers that schedule hierarchically, where each Ln scheduler aggregates multiple Ln-1 schedulers to schedule packets to be sent. Each Ln scheduler corresponds to a group of aggregation queues; each aggregation queue in the group corresponds to one Ln-1 scheduler and stores the packets to be sent that are scheduled by that Ln-1 scheduler.
  • a method for scheduling a message includes:
  • when polling reaches the Ln scheduler, the Ln scheduler dispatches the packet to be sent into one aggregation queue of the corresponding group of aggregation queues, where each aggregation queue in the group corresponds to one Ln-1 scheduler;
  • the packets to be sent are scheduled from the corresponding aggregate queue.
  • each Ln scheduler that aggregates transmission corresponds to a group of aggregation queues, and the different aggregation queues in the group store the packets to be sent that are scheduled by their corresponding aggregated Ln-1 schedulers. Because the aggregation queues and the Ln-1 schedulers schedule packets according to the "many-to-one" correspondence, the hierarchical queue scheduling architecture can be extended and applied where link aggregation technology is used, correctly reflecting the forwarding model of services that use aggregation technology and guaranteeing their QoS.
  • FIG. 1 is a schematic diagram of hierarchical scheduling in the prior art
  • FIG. 2 is a schematic diagram of an application scenario using an aggregation technology according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of scheduling, by using an aggregation queue, a message to be sent according to an embodiment of the present invention
  • FIG. 4 is a schematic diagram of a specific network topology structure for scheduling a message to be sent in an aggregation situation according to an embodiment of the present invention
  • FIG. 5 is a schematic diagram of hierarchical scheduling of link aggregation in the network environment shown in FIG. 4.
  • by extending the hierarchical scheduling architecture, the embodiments of the present invention give each Ln scheduler that aggregates transmission a corresponding group of aggregation queues, in which the packets to be sent scheduled by the Ln-1 schedulers are stored. Because the aggregation queues and the Ln-1 schedulers can schedule packets according to the "many-to-one" hierarchical correspondence, the hierarchical scheduling architecture can be applied where link aggregation technology is used and can correctly reflect the forwarding model of the service, thereby guaranteeing its QoS, as described in detail below.
  • FIG. 2 is a schematic diagram of an application scenario using aggregation technology; it illustrates a "many-to-many" mapping relationship between two adjacent layers of the hierarchical scheduling architecture, that is, a "many-to-many" mapping between the Ln scheduler 220 and the Ln-1 scheduler 210.
  • both the Ln scheduler 220 and the Ln-1 scheduler 210 need to perform scheduling according to a specified rate.
  • a "many-to-one" mapping between the Ln scheduler and the Ln-1 scheduler It is implemented according to the scheduling method defined by the prior art.
  • there is a "one-to-many” or “many-to-many” mapping between the Ln tuner 220 and the Ln-1 scheduler 210 that is, one Ln scheduler aggregates a plurality of Ln-1 schedulers, Queue scheduling cannot be performed directly according to the scheduling method defined by the prior art.
  • an embodiment of the present invention introduces an aggregation queue (Aggregation Queue) 200 between the Ln scheduler 220 that aggregates transmission and the Ln-1 schedulers 210 that are aggregated for transmission; that is, each Ln scheduler 220 that aggregates transmission corresponds to a group of aggregation queues, and the different aggregation queues in the group store the packets to be sent that are scheduled by their corresponding Ln-1 schedulers 210.
  • Aggregation Queue aggregation queue
  • the number of aggregation queues included in each group of aggregation queues is determined by the number of Ln-1 schedulers in the aggregation group, and different aggregation queues in the group correspond to different Ln-1 schedulers.
  • in the aggregation queue group of an Ln scheduler, the aggregation queues and the Ln-1 schedulers may correspond 1:1 or n:1; in either case, each aggregation queue in the group corresponds to one Ln-1 scheduler, which is not described further here.
  • in this embodiment, the Ln scheduler is the scheduler that aggregates transmission;
  • the Ln-1 scheduler is the scheduler that is aggregated for transmission.
  • the specific scheduling packet sending process is as follows:
  • after the Ln scheduler successfully schedules a packet, the packet is put into the corresponding aggregation queue according to its Ln-1 scheduler identifier (or its aggregation queue identifier);
  • Figure 4 is a schematic diagram of a network topology using hierarchical queue scheduling with link aggregation. As can be seen from Figure 4, Gigabit Ethernet (GE) Trunk technology is used between the Broadband Remote Access Server (BRAS) 401 and the user access layer device (Lanswitch) 402, and hierarchical scheduling is implemented on the BRAS device.
  • GE Gigabit Ethernet
  • each level of scheduler corresponds to the egress port of the device at that level; before forwarding packets received on a Packet over SONET/SDH (POS) interface to the GE interfaces, the BRAS 410 performs trunk processing and traffic classification processing.
  • POS Packet over SONET/SDH
  • trunk processing and traffic classification determine the GE aggregation queue ID and the traffic classification queue ID. After traffic classification is complete, the packet, together with the GE aggregation queue ID, enters the traffic classification queue.
  • BRAS 410 schedules each Lanswitch hierarchical scheduler 402 according to the configured rate (e.g., 100 Mbps);
  • once one of the Lanswitch hierarchical schedulers 402 is scheduled, each digital subscriber line access multiplexer (DSLAM) hierarchical scheduler 403 subordinate to it is scheduled according to the configured algorithm (e.g., a round-robin scheduling algorithm);
  • DSLAM Digital Subscriber Line Access Multiplexer
  • when one of the DSLAM hierarchical schedulers 403 is scheduled, each Customer Premises Equipment (CPE) hierarchical scheduler 404 subordinate to it is scheduled according to the configured algorithm, for example WFQ, in which each CPE hierarchical scheduler 404 is configured with a different weight;
  • when one of the CPE hierarchical schedulers 404 is scheduled, a packet is taken from the traffic classification queue it owns according to the configured algorithm (for example, a strict-priority scheduling algorithm) and placed into the corresponding GE aggregation queue 400 according to the packet's GE aggregation queue ID; 5) while steps 1) to 4) are being performed, the BRAS schedules each BRAS hierarchical scheduler 401 according to the configured rate (e.g., 1000 Mbps), and once one of the BRAS hierarchical schedulers 401 is scheduled, a packet is taken from the GE aggregation queue 400 it owns according to the configured algorithm (for example, a round-robin scheduling algorithm) and sent on the corresponding physical port, completing a full hierarchical queue scheduling process that supports link aggregation.
  • the configured rate (e.g., 1000 Mbps)
  • each Ln scheduler that aggregates transmission corresponds to a group of aggregation queues, and the different queues in the group store the packets to be sent that are scheduled by their corresponding aggregated Ln-1 schedulers. Because the aggregation queues and the Ln-1 schedulers schedule packets according to the "many-to-one" correspondence, the hierarchical queue scheduling architecture can be extended and applied where link aggregation technology is used, correctly reflecting the forwarding model of services that use aggregation technology and guaranteeing their QoS.

Abstract

An apparatus for dispatching messages includes Ln dispatchers that dispatch and aggregate transmission under hierarchical scheduling and Ln-1 dispatchers that dispatch and are aggregated for transmission under hierarchical scheduling. Each of the said Ln dispatchers aggregates multiple Ln-1 dispatchers to dispatch the messages to be transmitted. Each Ln dispatcher that aggregates transmission corresponds to a group of aggregation queues, and the different queues in the group respectively store the messages to be transmitted that are dispatched by the Ln-1 dispatchers that are aggregated for transmission. Further, a corresponding method for dispatching messages is also disclosed. The present invention may be applied where the aggregation technique is used; it achieves hierarchical dispatching of the messages to be transmitted and guarantees the QoS of services that apply the aggregation technique.

Description

Apparatus and method for scheduling messages
This application claims priority to Chinese Patent Application No. 200610036002.1, entitled "Apparatus and Method for Scheduling Message Transmission", filed with the Chinese Patent Office on June 19, 2006, the entire contents of which are incorporated by reference in this application.
Technical Field
The present invention relates to message scheduling technology and, more particularly, to an apparatus and method for scheduling messages.
Background
In the complex environment of Internet packet switching, network congestion is common. Congestion prevents traffic from obtaining resources in time and is a source of degraded service performance. Congestion can have the following negative effects: it increases packet transmission delay and delay jitter, and excessive delay causes packet retransmission; it reduces the effective throughput of the network, wasting network resources; worsening congestion consumes large amounts of network resources (especially storage resources), and unreasonable resource allocation may even drive the system into resource deadlock and collapse. Nevertheless, in a complex environment where packet switching and multi-user services coexist, congestion is common, and it must be managed and controlled when it occurs; common solutions usually employ queuing techniques.
At present, according to the topology of a typical broadband access network, the prior art defines a multi-layer "many-to-one" tree queue scheduling architecture, so that digital subscriber line (DSL) broadband access technology can carry services with quality of service (QoS) requirements. As shown in FIG. 1, the tree queue scheduling architecture truly reflects the forwarding model of typical broadband services and can therefore meet the QoS requirements of different services.
Specifically, the scheduling process of hierarchical queue scheduling in the prior art is as follows:
1) First, the device schedules the physical port scheduler 101 according to the rate of the physical port. Once the physical port scheduler is scheduled, the next-level scheduler it owns, the VP scheduler 102, is scheduled according to the configured scheduling algorithm, for example Weighted Fair Queuing (WFQ), in which each virtual path (VP) scheduler is configured with a different weight;
2) When one of the VP schedulers 102 is scheduled, the next-level scheduler it owns, the virtual circuit (VC) group scheduler 103, is scheduled according to the configured scheduling algorithm;
3) When one of the VC group schedulers 103 is scheduled, the next-level scheduler it owns, the VC scheduler 104, is scheduled according to the configured scheduling algorithm;
4) When one of the VC schedulers 104 is scheduled, the next-level scheduler it owns, the Session scheduler 105, is scheduled according to the configured scheduling algorithm;
5) Finally, when one of the Session schedulers 105 is scheduled, the queue it owns, a traffic classification queue, is scheduled according to the configured scheduling algorithm, and a packet is taken from the traffic classification queue to be sent on the physical port.
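The prior-art polling order in steps 1) to 5) can be sketched as a small recursive tree walk. The patent contains no code; the Python below is an illustrative assumption only, with a weight-biased random pick standing in for WFQ and a deliberately shallow tree standing in for the port/VP/VC/Session hierarchy of FIG. 1.

```python
import random
from collections import deque


class FlowQueue:
    """Leaf queue holding packets for one traffic class (the level below the Session scheduler)."""

    def __init__(self):
        self.packets = deque()

    def dequeue(self):
        return self.packets.popleft() if self.packets else None


class Scheduler:
    """One node of the "many-to-one" tree: port, VP, VC-group, VC or Session scheduler.

    children are either lower-level Scheduler objects or FlowQueue leaves.
    """

    def __init__(self, name, children, weights=None):
        self.name = name
        self.children = children
        self.weights = weights or [1] * len(children)

    def dequeue(self):
        # Stand-in for the configured algorithm (e.g. WFQ): visit children in a
        # weight-biased random order, then recurse until a leaf queue yields a packet.
        order = sorted(range(len(self.children)),
                       key=lambda i: random.random() ** (1.0 / self.weights[i]),
                       reverse=True)
        for i in order:
            pkt = self.children[i].dequeue()
            if pkt is not None:
                return pkt
        return None


# A miniature tree: two flow queues under one Session scheduler, one VC, one port.
flows = [FlowQueue(), FlowQueue()]
flows[0].packets.extend(["pkt-a1", "pkt-a2"])
flows[1].packets.append("pkt-b1")
session = Scheduler("session-0", flows)
vc = Scheduler("vc-0", [session])
port = Scheduler("port-0", [vc])

while (pkt := port.dequeue()) is not None:
    print("sent on physical port:", pkt)
```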
Although the prior art defines a hierarchical queue scheduling model, this scheduling process does not need to be followed strictly in practice, because the number of levels of hierarchical scheduling is not fixed; it is determined by the topological levels of the network, and the role of each level of scheduler also needs to be mapped according to the network structure.
On the other hand, when a link requires more bandwidth than a single physical interface or channel can provide, the prior art can use aggregation technology to bundle one or more physical interfaces or channels together into one logical interface or channel. This logical interface or channel provides higher bandwidth for the link, and the bandwidth available to the link is the sum of the bandwidths of all the physical interfaces or channels. The main aggregation technologies at present are Ethernet Trunk, Multilink PPP, Multilink Frame Relay, and so on. Because, where aggregation technology is used, different packets of the same VP are transmitted through different physical interfaces, the mapping between the VP and the physical interfaces is "one-to-many" or "many-to-many" rather than "many-to-one", so the prior-art architecture cannot correctly reflect the forwarding model of services that use aggregation technology and cannot guarantee their QoS.
Summary of the Invention
Embodiments of the present invention provide an apparatus and a method for scheduling messages, so that packets to be sent can be scheduled hierarchically even where aggregation technology is used, and the QoS of services that use aggregation technology can be guaranteed.
An apparatus for scheduling messages provided by an embodiment of the present invention includes Ln schedulers and Ln-1 schedulers that schedule hierarchically, each Ln scheduler aggregating multiple Ln-1 schedulers to schedule packets to be sent, wherein: each Ln scheduler corresponds to a group of aggregation queues, and each aggregation queue in the group corresponds to one Ln-1 scheduler and stores the packets to be sent that are scheduled by that Ln-1 scheduler.
Correspondingly, a method for scheduling messages provided by an embodiment of the present invention includes:
sending packets according to hierarchical scheduling: when polling reaches an Ln scheduler, the Ln scheduler dispatches the packet to be sent into one aggregation queue of the corresponding group of aggregation queues, wherein each aggregation queue in the group corresponds to one Ln-1 scheduler;
when polling reaches an Ln-1 scheduler, scheduling the packet to be sent from the aggregation queue corresponding to it. In the embodiments of the present invention, each Ln scheduler that aggregates transmission corresponds to a group of aggregation queues, and the different aggregation queues in the group store the packets to be sent that are scheduled by their corresponding aggregated Ln-1 schedulers. Because the aggregation queues and the Ln-1 schedulers schedule packets according to the "many-to-one" correspondence, the hierarchical queue scheduling architecture can be extended and applied where link aggregation technology is used, correctly reflecting the forwarding model of services that use aggregation technology and guaranteeing their QoS.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of hierarchical scheduling in the prior art;
FIG. 2 is a schematic diagram of an application scenario using aggregation technology according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of scheduling packets to be sent by using aggregation queues according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a specific network topology for scheduling packets to be sent in an aggregation scenario according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of hierarchical scheduling with link aggregation in the network environment shown in FIG. 4.
Detailed Description
By extending the hierarchical scheduling architecture, the embodiments of the present invention give each Ln scheduler that aggregates transmission a corresponding group of aggregation queues, in which the different aggregation queues store the packets to be sent that are scheduled by the corresponding aggregated Ln-1 schedulers. Because the aggregation queues and the Ln-1 schedulers can schedule packets according to the "many-to-one" hierarchical correspondence, the hierarchical scheduling architecture can be applied where link aggregation technology is used and can correctly reflect the forwarding model of the service, thereby guaranteeing its QoS. This is described in detail below.
Where link aggregation technology is used, different packets of the same link are transmitted through different lower-layer physical interfaces or channels. To guarantee the QoS of the service, this "one-to-many" or "many-to-many" mapping must be reflected in the hierarchical queue scheduling architecture. Referring to FIG. 2, FIG. 2 is a schematic diagram of an application scenario using aggregation technology; it shows a "many-to-many" mapping between two adjacent layers of the hierarchical scheduling architecture, that is, a "many-to-many" mapping between the Ln schedulers 220 and the Ln-1 schedulers 210.
In hierarchical scheduling, both the Ln scheduler 220 and the Ln-1 scheduler 210 must be scheduled at specified rates. Where there is a "many-to-one" mapping between the Ln scheduler and the Ln-1 scheduler, scheduling can be implemented according to the scheduling method defined by the prior art. However, where there is a "one-to-many" or "many-to-many" mapping between the Ln scheduler 220 and the Ln-1 scheduler 210, that is, where one Ln scheduler aggregates multiple Ln-1 schedulers, queue scheduling cannot be performed directly according to the scheduling method defined by the prior art.
To solve this problem, referring to FIG. 3, the embodiments of the present invention introduce aggregation queues 200 between the Ln scheduler 220 that aggregates transmission and the Ln-1 schedulers 210 that are aggregated for transmission; that is, each Ln scheduler 220 that aggregates transmission corresponds to a group of aggregation queues, and the different aggregation queues in the group store the packets to be sent that are scheduled by their corresponding Ln-1 schedulers 210.
For the group of aggregation queues corresponding to each Ln scheduler, the number of aggregation queues in the group is determined by the number of Ln-1 schedulers in the aggregation group, and the different aggregation queues in the group correspond to different Ln-1 schedulers. In a specific implementation, the aggregation queues in the aggregation queue group of an Ln scheduler may correspond to the Ln-1 schedulers 1:1 or n:1; in either case, each aggregation queue in the group corresponds to one Ln-1 scheduler, which is not described further here.
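As a sketch of the structure just described, the fragment below builds the group of aggregation queues for one aggregating Ln scheduler; the group size follows the number of Ln-1 schedulers, and the queues_per_scheduler parameter switches between the 1:1 and n:1 mappings. The function name, identifiers, and dict layout are assumptions for illustration, not part of the patent.

```python
from collections import deque


def build_aggregation_group(ln_scheduler_id, ln_minus1_ids, queues_per_scheduler=1):
    """Return {aggregation_queue_id: (owning Ln-1 scheduler id, queue)} for one Ln scheduler.

    queues_per_scheduler = 1 gives the 1:1 mapping; a value n > 1 gives n:1.
    Either way, every aggregation queue maps back to exactly one Ln-1 scheduler.
    """
    group = {}
    for ln1 in ln_minus1_ids:
        for k in range(queues_per_scheduler):
            qid = f"{ln_scheduler_id}/{ln1}/q{k}"
            group[qid] = (ln1, deque())
    return group


# One aggregating Ln scheduler over three Ln-1 schedulers, 1:1 mapping.
group = build_aggregation_group("Ln-0", ["Ln1-a", "Ln1-b", "Ln1-c"])
print(sorted(group))   # three aggregation queues, one per Ln-1 scheduler
```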
The following describes the method for scheduling messages according to an embodiment of the present invention. In this embodiment the Ln scheduler aggregates transmission and the Ln-1 scheduler is aggregated for transmission; the specific scheduling and sending process is as follows:
1) Hierarchical queue scheduling is started; when polling reaches the Ln scheduler that aggregates transmission, the Ln scheduler is scheduled according to the specified rate;
2) After the Ln scheduler successfully schedules a packet, it puts the packet into the corresponding aggregation queue according to the packet's Ln-1 scheduler identifier (or its aggregation queue identifier);
3) When hierarchical queue scheduling polls the aggregated Ln-1 scheduler, the Ln-1 scheduler is scheduled according to the specified rate and schedules the packets to be sent from its corresponding aggregation queue. Because the correspondence between the aggregation queues and the Ln-1 scheduler is "many-to-one" in the embodiments of the present invention, packets can continue to be scheduled in the hierarchical queue scheduling manner, which is not described further here. The "many-to-many" mapping has been described above; the "one-to-many" mapping is a special case of the "many-to-many" mapping and, following the above procedure, can easily be implemented by those skilled in the art, so it is not described further here.
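Steps 1) to 3) reduce to an enqueue keyed by the packet's Ln-1 scheduler (or aggregation queue) identifier and a dequeue in which each Ln-1 scheduler serves only its own aggregation queues, restoring the "many-to-one" relationship. The packet representation and helper names in this sketch are assumed for illustration.

```python
from collections import deque

# group: {aggregation_queue_id: (ln_minus1_id, deque_of_packets)}, as in the previous sketch.
group = {
    "q-a": ("Ln1-a", deque()),
    "q-b": ("Ln1-b", deque()),
}


def ln_enqueue(group, packet):
    """Step 2): the aggregating Ln scheduler drops a scheduled packet into the
    aggregation queue named by the packet's aggregation-queue identifier."""
    group[packet["agg_queue_id"]][1].append(packet)


def ln_minus1_dequeue(group, ln_minus1_id):
    """Step 3): the aggregated Ln-1 scheduler serves only the aggregation queues
    that map to it, so the downstream hierarchy again sees a "many-to-one" tree."""
    for qid, (owner, q) in group.items():
        if owner == ln_minus1_id and q:
            return q.popleft()
    return None


ln_enqueue(group, {"payload": b"hello", "agg_queue_id": "q-b"})
print(ln_minus1_dequeue(group, "Ln1-b"))   # the packet just enqueued
print(ln_minus1_dequeue(group, "Ln1-a"))   # None: nothing pending for this scheduler
```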
The following uses a practical application scenario to describe the hierarchical queue scheduling process in a link aggregation scenario. FIG. 4 is a schematic diagram of a network topology that uses hierarchical queue scheduling with link aggregation. As can be seen from FIG. 4, Gigabit Ethernet (GE) aggregation (Trunk) technology is used between the Broadband Remote Access Server (BRAS) 401 and the user access layer device (Lanswitch) 402, and the hierarchical scheduling function is implemented on the BRAS device.
By analyzing the above network topology, a hierarchical scheduling model that supports link aggregation can be designed, as shown in FIG. 5. Each level of scheduler corresponds to the egress port of the device at that level. Before forwarding packets received on a Packet over SONET/SDH (POS) interface to the GE interfaces, the BRAS 410 performs trunk processing and traffic classification processing, which determine the GE aggregation queue ID and the traffic classification queue ID. When traffic classification is complete, the packet, together with the GE aggregation queue ID, enters the traffic classification queue.
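The patent says only that trunk processing and traffic classification yield a GE aggregation queue ID and a traffic classification queue ID; it does not specify how either is computed. The sketch below assumes, purely for illustration, a CRC32 flow hash over the IP pair to pick a GE trunk member and a static service-to-class table.

```python
import zlib

GE_MEMBERS = 2                                   # number of GE ports in the trunk (assumed)
CLASSES = {"voice": 0, "video": 1, "data": 2}    # assumed traffic classes


def classify(packet):
    """Attach the GE aggregation queue ID and traffic classification queue ID
    before the packet enters its flow-classification queue."""
    flow_key = f"{packet['src_ip']}-{packet['dst_ip']}".encode()
    packet["ge_agg_queue_id"] = zlib.crc32(flow_key) % GE_MEMBERS
    packet["class_queue_id"] = CLASSES.get(packet.get("service", "data"), CLASSES["data"])
    return packet


pkt = classify({"src_ip": "10.0.0.1", "dst_ip": "192.0.2.9", "service": "video"})
print(pkt["ge_agg_queue_id"], pkt["class_queue_id"])
```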
In this embodiment, the BRAS performs hierarchical queue scheduling as follows:
1) First, the BRAS 410 schedules each Lanswitch hierarchical scheduler 402 according to the configured rate (e.g., 100 Mbps);
2) Once one of the Lanswitch hierarchical schedulers 402 is scheduled, each digital subscriber line access multiplexer (DSLAM) hierarchical scheduler 403 subordinate to it is scheduled according to the configured algorithm (for example, a round-robin scheduling algorithm);
3) When one of the DSLAM hierarchical schedulers 403 is scheduled, each customer premises equipment (CPE) hierarchical scheduler 404 subordinate to it is scheduled according to the configured algorithm, for example a WFQ scheduling algorithm in which each CPE hierarchical scheduler 404 is configured with a different weight;
4) When one of the CPE hierarchical schedulers 404 is scheduled, a packet is taken from the traffic classification queue it owns according to the configured algorithm (for example, a strict-priority scheduling algorithm) and placed into the corresponding GE aggregation queue 400 according to the packet's GE aggregation queue ID;
5) While steps 1) to 4) are being performed, the BRAS schedules each BRAS hierarchical scheduler 401 according to the configured rate (e.g., 1000 Mbps); once one of the BRAS hierarchical schedulers 401 is scheduled, a packet is taken from the GE aggregation queue 400 it owns according to the configured algorithm (for example, a round-robin scheduling algorithm) and sent out on the corresponding physical port. This completes a full hierarchical queue scheduling process that supports link aggregation.
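The five steps can be summarized as a per-level configuration table for the FIG. 5 hierarchy. Only the rates and algorithms named in the example (100 Mbps, 1000 Mbps, round robin, WFQ, strict priority) come from the text; the dictionary layout is an assumption.

```python
# One entry per scheduling level of FIG. 5, from the access side up to the GE trunk ports.
BRAS_HIERARCHY = [
    {"level": "Lanswitch scheduler 402", "how": "rate", "value": "100 Mbps"},
    {"level": "DSLAM scheduler 403", "how": "algorithm", "value": "round robin"},
    {"level": "CPE scheduler 404", "how": "algorithm", "value": "WFQ with per-CPE weights"},
    {"level": "flow classification queues", "how": "algorithm", "value": "strict priority"},
    {"level": "BRAS scheduler 401 over GE aggregation queues 400", "how": "rate",
     "value": "1000 Mbps, then round robin over the GE aggregation queues"},
]

for lvl in BRAS_HIERARCHY:
    print(f"{lvl['level']:55s} scheduled by {lvl['how']}: {lvl['value']}")
```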
In the embodiments of the present invention, each Ln scheduler that aggregates transmission corresponds to a group of aggregation queues, and the different queues in the group store the packets to be sent that are scheduled by their corresponding aggregated Ln-1 schedulers. Because the aggregation queues and the Ln-1 schedulers schedule packets according to the "many-to-one" correspondence, the hierarchical queue scheduling architecture can be extended and applied where link aggregation technology is used, correctly reflecting the forwarding model of services that use aggregation technology and guaranteeing their QoS.
The foregoing describes only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that readily occurs to a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims

Claims
1. An apparatus for scheduling messages, comprising Ln schedulers that schedule hierarchically and Ln-1 schedulers that schedule hierarchically, each Ln scheduler aggregating a plurality of Ln-1 schedulers to schedule packets to be sent, characterized in that
each Ln scheduler corresponds to a group of aggregation queues; each aggregation queue in the group corresponds to one Ln-1 scheduler and stores the packets to be sent that are scheduled by that Ln-1 scheduler.
2. The apparatus for scheduling messages according to claim 1, characterized in that the number of aggregation queues in the group of aggregation queues corresponding to each Ln scheduler is determined by the number of Ln-1 schedulers.
3. The apparatus for scheduling messages according to claim 2, characterized in that the number of aggregation queues in the group of aggregation queues corresponding to each Ln scheduler corresponds 1:1 to the number of Ln-1 schedulers.
4. The apparatus for scheduling messages according to claim 2, characterized in that the number of aggregation queues in the group of aggregation queues corresponding to each Ln scheduler corresponds n:1 to the number of Ln-1 schedulers.
5. The apparatus for scheduling messages according to claim 1, characterized in that the packet to be sent is placed into the aggregation queue by the corresponding Ln scheduler according to the Ln-1 scheduler identifier or the aggregation queue identifier of the packet to be sent.
6. The apparatus for scheduling messages according to any one of claims 1 to 5, characterized in that the Ln scheduler is a virtual path scheduler and the Ln-1 scheduler is a physical port scheduler.
7. A method for scheduling messages, characterized by comprising:
scheduling packets to be sent hierarchically: when polling reaches an Ln scheduler, the Ln scheduler dispatches the packet to be sent into one aggregation queue of a corresponding group of aggregation queues, wherein each aggregation queue in the group corresponds to one Ln-1 scheduler;
when polling reaches the Ln-1 scheduler, scheduling the packet to be sent from the aggregation queue corresponding to it.
8. The method for scheduling messages according to claim 7, characterized in that the number of aggregation queues in the group of aggregation queues corresponding to each Ln scheduler is determined by the number of Ln-1 schedulers.
9. The method for scheduling messages according to claim 8, characterized in that the correspondence between the number of aggregation queues in the group of aggregation queues corresponding to each Ln scheduler and the number of Ln-1 schedulers is 1:1.
10. The method for scheduling messages according to claim 8, characterized in that the correspondence between the number of aggregation queues in the group of aggregation queues corresponding to each Ln scheduler and the number of Ln-1 schedulers is n:1.
11. The method for scheduling message transmission according to claim 7, characterized in that the Ln scheduler dispatches the packet to be sent into one aggregation queue of the corresponding group of aggregation queues according to the Ln-1 scheduler identifier or the aggregation queue identifier of the packet to be sent.
12. The method for scheduling messages according to any one of claims 7 to 11, characterized in that the Ln scheduler is a virtual path scheduler and the Ln-1 scheduler is a physical port scheduler.
13. The method for scheduling messages according to claim 7, characterized in that both the Ln scheduler and the Ln-1 scheduler are scheduled at specified rates.
PCT/CN2007/001171 2006-06-19 2007-04-11 An apparatus and method for dispatching message WO2008000127A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNA2006100360021A CN1968186A (en) 2006-06-19 2006-06-19 Message sending scheduling apparatus and method
CN200610036002.1 2006-06-19

Publications (1)

Publication Number Publication Date
WO2008000127A1 (en)

Family

ID=38076743

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2007/001171 WO2008000127A1 (en) 2006-06-19 2007-04-11 An apparatus and method for dispatching message

Country Status (2)

Country Link
CN (1) CN1968186A (en)
WO (1) WO2008000127A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8171495B2 (en) 2008-05-29 2012-05-01 Microsoft Corporation Queue dispatch using deferred acknowledgement

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102546395B (en) * 2011-12-14 2018-04-27 中兴通讯股份有限公司 Business scheduling method and device based on L2VPN networks
CN102685130A (en) * 2012-05-10 2012-09-19 苏州阔地网络科技有限公司 Dispatching control method and system for cloud conference
CN102957628A (en) * 2012-12-12 2013-03-06 福建星网锐捷网络有限公司 Method, device and access device for packet polymerization
CN104348751B (en) 2013-07-31 2019-03-12 中兴通讯股份有限公司 Virtual output queue authorization management method and device
CN104348750B (en) * 2013-07-31 2019-07-26 中兴通讯股份有限公司 The implementation method and device of QoS in OpenFlow network
CN108259355B (en) * 2014-12-30 2022-03-11 华为技术有限公司 Message forwarding method and device
CN110750367B (en) * 2019-09-30 2023-03-17 超聚变数字技术有限公司 Method, system and related equipment for queue communication

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1611035A (en) * 2001-04-13 2005-04-27 飞思卡尔半导体公司 Manipulating data streams in data stream processors
CN1631008A (en) * 2001-07-13 2005-06-22 艾利森公司 Method and apparatus for scheduling message processing
GB2420430A (en) * 2004-11-23 2006-05-24 Inst Information Industry Fast developing and testing of communication protocol software

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1611035A (en) * 2001-04-13 2005-04-27 飞思卡尔半导体公司 Manipulating data streams in data stream processors
CN1631008A (en) * 2001-07-13 2005-06-22 艾利森公司 Method and apparatus for scheduling message processing
GB2420430A (en) * 2004-11-23 2006-05-24 Inst Information Industry Fast developing and testing of communication protocol software

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8171495B2 (en) 2008-05-29 2012-05-01 Microsoft Corporation Queue dispatch using deferred acknowledgement

Also Published As

Publication number Publication date
CN1968186A (en) 2007-05-23

Similar Documents

Publication Publication Date Title
US7180855B1 (en) Service interface for QoS-driven HPNA networks
WO2008000127A1 (en) An apparatus and method for dispatching message
EP2695338B1 (en) Port and priority based flow control mechanism
US7532627B2 (en) Wideband upstream protocol
US7583700B1 (en) Service interface for QoS-driven HPNA networks
US7523188B2 (en) System and method for remote traffic management in a communication network
US7855960B2 (en) Traffic shaping method and device
KR101333507B1 (en) Reporting multicast bandwidth consumption between a multicast replicating node and a traffic scheduling node
WO2010000191A1 (en) Method and device for packet scheduling
WO2015014133A1 (en) Qos implementation method and apparatus in openflow network
WO2006069527A1 (en) A method, a apparatus and a network thereof for ensuring the service qos of broadband access
WO2014173367A2 (en) Qos implementation method, system, device and computer storage medium
WO2008125027A1 (en) Business dispatching method and network concourse device thereof
WO2012171461A1 (en) Method and device for forwarding packet
Kaur et al. Core-stateless guaranteed throughput networks
WO2011144100A2 (en) Service scheduling method and apparatus under multiple broadband network gateways
WO2007085159A1 (en) QoS CONTROL METHOD AND SYSTEM
Rajanna et al. Xco: Explicit coordination for preventing congestion in data center ethernet
JP4028302B2 (en) Packet relay method, relay device, and network system using the relay device
Domżał et al. Efficient congestion control mechanism for flow‐aware networks
Chen et al. On meeting deadlines in datacenter networks
JP2004104708A (en) Fair access network system
Angelopoulos et al. Using a multiple priority reservation MAC to support differentiated services over HFC systems
Sam et al. Quality-of-service provisioning using hierarchical scheduling in optical burst switched network
JP2004064303A (en) Method for accommodating atm access channel and gateway device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07720744

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: RU

122 Ep: pct application non-entry in european phase

Ref document number: 07720744

Country of ref document: EP

Kind code of ref document: A1